00:00:00.001 Started by upstream project "autotest-per-patch" build number 132521
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.069 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.069 The recommended git tool is: git
00:00:00.070 using credential 00000000-0000-0000-0000-000000000002
00:00:00.072 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.133 Fetching changes from the remote Git repository
00:00:00.136 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.200 Using shallow fetch with depth 1
00:00:00.200 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.200 > git --version # timeout=10
00:00:00.266 > git --version # 'git version 2.39.2'
00:00:00.266 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.298 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.298 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.434 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.449 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.461 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.461 > git config core.sparsecheckout # timeout=10
00:00:04.477 > git read-tree -mu HEAD # timeout=10
00:00:04.494 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.517 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.517 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.617 [Pipeline] Start of Pipeline
00:00:04.630 [Pipeline] library
00:00:04.631 Loading library shm_lib@master
00:00:04.631 Library shm_lib@master is cached. Copying from home.
00:00:04.647 [Pipeline] node
00:00:04.656 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.659 [Pipeline] {
00:00:04.670 [Pipeline] catchError
00:00:04.671 [Pipeline] {
00:00:04.681 [Pipeline] wrap
00:00:04.687 [Pipeline] {
00:00:04.695 [Pipeline] stage
00:00:04.697 [Pipeline] { (Prologue)
00:00:04.897 [Pipeline] sh
00:00:05.179 + logger -p user.info -t JENKINS-CI
00:00:05.198 [Pipeline] echo
00:00:05.199 Node: CYP12
00:00:05.208 [Pipeline] sh
00:00:05.519 [Pipeline] setCustomBuildProperty
00:00:05.529 [Pipeline] echo
00:00:05.531 Cleanup processes
00:00:05.536 [Pipeline] sh
00:00:05.824 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.824 1766718 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.837 [Pipeline] sh
00:00:06.119 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.119 ++ grep -v 'sudo pgrep'
00:00:06.119 ++ awk '{print $1}'
00:00:06.119 + sudo kill -9
00:00:06.119 + true
00:00:06.165 [Pipeline] cleanWs
00:00:06.185 [WS-CLEANUP] Deleting project workspace...
00:00:06.185 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.201 [WS-CLEANUP] done
00:00:06.222 [Pipeline] setCustomBuildProperty
00:00:06.237 [Pipeline] sh
00:00:06.526 + sudo git config --global --replace-all safe.directory '*'
00:00:06.608 [Pipeline] httpRequest
00:00:07.319 [Pipeline] echo
00:00:07.320 Sorcerer 10.211.164.20 is alive
00:00:07.330 [Pipeline] retry
00:00:07.332 [Pipeline] {
00:00:07.341 [Pipeline] httpRequest
00:00:07.345 HttpMethod: GET
00:00:07.346 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.346 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.363 Response Code: HTTP/1.1 200 OK
00:00:07.363 Success: Status code 200 is in the accepted range: 200,404
00:00:07.363 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.173 [Pipeline] }
00:00:13.190 [Pipeline] // retry
00:00:13.198 [Pipeline] sh
00:00:13.486 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.502 [Pipeline] httpRequest
00:00:14.108 [Pipeline] echo
00:00:14.110 Sorcerer 10.211.164.20 is alive
00:00:14.120 [Pipeline] retry
00:00:14.122 [Pipeline] {
00:00:14.136 [Pipeline] httpRequest
00:00:14.142 HttpMethod: GET
00:00:14.142 URL: http://10.211.164.20/packages/spdk_8afd1c921c6aa1340e442a866f4aeb155cdec456.tar.gz
00:00:14.143 Sending request to url: http://10.211.164.20/packages/spdk_8afd1c921c6aa1340e442a866f4aeb155cdec456.tar.gz
00:00:14.150 Response Code: HTTP/1.1 200 OK
00:00:14.150 Success: Status code 200 is in the accepted range: 200,404
00:00:14.151 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_8afd1c921c6aa1340e442a866f4aeb155cdec456.tar.gz
00:02:50.771 [Pipeline] }
00:02:50.788 [Pipeline] // retry
00:02:50.795 [Pipeline] sh
00:02:51.082 + tar --no-same-owner -xf spdk_8afd1c921c6aa1340e442a866f4aeb155cdec456.tar.gz
00:02:54.403 [Pipeline] sh
00:02:54.690 + git -C spdk log --oneline -n5
00:02:54.690 8afd1c921 blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:02:54.690 9c7e54d62 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:02:54.690 9ebbe7008 blob: fix possible memory leak in bs loading
00:02:54.690 ff2e6bfe4 lib/lvol: cluster size must be a multiple of bs_dev->blocklen
00:02:54.690 9885e1d29 lib/blob: cluster_sz must be a multiple of PAGE
00:02:54.701 [Pipeline] }
00:02:54.716 [Pipeline] // stage
00:02:54.726 [Pipeline] stage
00:02:54.728 [Pipeline] { (Prepare)
00:02:54.749 [Pipeline] writeFile
00:02:54.769 [Pipeline] sh
00:02:55.058 + logger -p user.info -t JENKINS-CI
00:02:55.071 [Pipeline] sh
00:02:55.517 + logger -p user.info -t JENKINS-CI
00:02:55.530 [Pipeline] sh
00:02:55.815 + cat autorun-spdk.conf
00:02:55.815 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:55.815 SPDK_TEST_NVMF=1
00:02:55.815 SPDK_TEST_NVME_CLI=1
00:02:55.815 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:55.815 SPDK_TEST_NVMF_NICS=e810
00:02:55.815 SPDK_TEST_VFIOUSER=1
00:02:55.815 SPDK_RUN_UBSAN=1
00:02:55.815 NET_TYPE=phy
00:02:55.822 RUN_NIGHTLY=0
00:02:55.826 [Pipeline] readFile
00:02:55.848 [Pipeline] withEnv
00:02:55.850 [Pipeline] {
00:02:55.862 [Pipeline] sh
00:02:56.154 + set -ex
00:02:56.154 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:56.154 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:56.154 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:56.154 ++ SPDK_TEST_NVMF=1
00:02:56.154 ++ SPDK_TEST_NVME_CLI=1
00:02:56.154 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:56.154 ++ SPDK_TEST_NVMF_NICS=e810
00:02:56.154 ++ SPDK_TEST_VFIOUSER=1
00:02:56.154 ++ SPDK_RUN_UBSAN=1
00:02:56.154 ++ NET_TYPE=phy
00:02:56.154 ++ RUN_NIGHTLY=0
00:02:56.154 + case $SPDK_TEST_NVMF_NICS in
00:02:56.154 + DRIVERS=ice
00:02:56.154 + [[ tcp == \r\d\m\a ]]
00:02:56.154 + [[ -n ice ]]
00:02:56.154 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:56.154 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:56.154 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:56.154 rmmod: ERROR: Module irdma is not currently loaded
00:02:56.154 rmmod: ERROR: Module i40iw is not currently loaded
00:02:56.154 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:56.154 + true
00:02:56.154 + for D in $DRIVERS
00:02:56.154 + sudo modprobe ice
00:02:56.154 + exit 0
00:02:56.164 [Pipeline] }
00:02:56.178 [Pipeline] // withEnv
00:02:56.183 [Pipeline] }
00:02:56.196 [Pipeline] // stage
00:02:56.204 [Pipeline] catchError
00:02:56.205 [Pipeline] {
00:02:56.219 [Pipeline] timeout
00:02:56.219 Timeout set to expire in 1 hr 0 min
00:02:56.221 [Pipeline] {
00:02:56.235 [Pipeline] stage
00:02:56.237 [Pipeline] { (Tests)
00:02:56.247 [Pipeline] sh
00:02:56.532 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:56.533 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:56.533 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:56.533 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:56.533 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:56.533 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:56.533 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:56.533 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:56.533 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:56.533 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:56.533 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:56.533 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:56.533 + source /etc/os-release
00:02:56.533 ++ NAME='Fedora Linux'
00:02:56.533 ++ VERSION='39 (Cloud Edition)'
00:02:56.533 ++ ID=fedora
00:02:56.533 ++ VERSION_ID=39
00:02:56.533 ++ VERSION_CODENAME=
00:02:56.533 ++ PLATFORM_ID=platform:f39
00:02:56.533 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:56.533 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:56.533 ++ LOGO=fedora-logo-icon
00:02:56.533 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:56.533 ++ HOME_URL=https://fedoraproject.org/
00:02:56.533 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:56.533 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:56.533 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:56.533 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:56.533 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:56.533 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:56.533 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:56.533 ++ SUPPORT_END=2024-11-12
00:02:56.533 ++ VARIANT='Cloud Edition'
00:02:56.533 ++ VARIANT_ID=cloud
00:02:56.533 + uname -a
00:02:56.533 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:56.533 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:59.830 Hugepages
00:02:59.830 node hugesize free / total
00:02:59.830 node0 1048576kB 0 / 0
00:02:59.830 node0 2048kB 0 / 0
00:02:59.830 node1 1048576kB 0 / 0
00:02:59.830 node1 2048kB 0 / 0
00:02:59.830
00:02:59.830 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:59.830 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:02:59.830 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:02:59.830 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:02:59.830 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:02:59.830 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:02:59.830 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:02:59.830 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:02:59.830 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:02:59.830 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:02:59.830 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:02:59.830 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:02:59.830 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:02:59.830 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:02:59.830 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:02:59.830 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:02:59.830 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:02:59.830 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:02:59.830 + rm -f /tmp/spdk-ld-path
00:02:59.830 + source autorun-spdk.conf
00:02:59.830 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:59.830 ++ SPDK_TEST_NVMF=1
00:02:59.830 ++ SPDK_TEST_NVME_CLI=1
00:02:59.830 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:59.830 ++ SPDK_TEST_NVMF_NICS=e810
00:02:59.830 ++ SPDK_TEST_VFIOUSER=1
00:02:59.830 ++ SPDK_RUN_UBSAN=1
00:02:59.830 ++ NET_TYPE=phy
00:02:59.830 ++ RUN_NIGHTLY=0
00:02:59.830 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:59.830 + [[ -n '' ]]
00:02:59.830 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:59.830 + for M in /var/spdk/build-*-manifest.txt
00:02:59.830 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:59.830 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:59.830 + for M in /var/spdk/build-*-manifest.txt
00:02:59.830 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:59.830 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:59.830 + for M in /var/spdk/build-*-manifest.txt
00:02:59.830 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:59.830 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:59.830 ++ uname
00:02:59.830 + [[ Linux == \L\i\n\u\x ]]
00:02:59.830 + sudo dmesg -T
00:03:00.092 + sudo dmesg --clear
00:03:00.092 + dmesg_pid=1768397
00:03:00.092 + [[ Fedora Linux == FreeBSD ]]
00:03:00.092 + sudo dmesg -Tw
00:03:00.092 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:00.092 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:00.092 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:00.092 + [[ -x /usr/src/fio-static/fio ]]
00:03:00.092 + export FIO_BIN=/usr/src/fio-static/fio
00:03:00.092 + FIO_BIN=/usr/src/fio-static/fio
00:03:00.092 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:00.092 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:00.092 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:00.092 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:00.092 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:00.093 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:00.093 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:00.093 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:00.093 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:00.093 07:12:44 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:03:00.093 07:12:44 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:00.093 07:12:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:00.093 07:12:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:03:00.093 07:12:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:03:00.093 07:12:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:00.093 07:12:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:03:00.093 07:12:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:03:00.093 07:12:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:03:00.093 07:12:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:03:00.093 07:12:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:03:00.093 07:12:44 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:00.093 07:12:44 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:00.093 07:12:44 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:03:00.093 07:12:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:00.093 07:12:44 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:00.093 07:12:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:00.093 07:12:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:00.093 07:12:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:00.093 07:12:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:00.093 07:12:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:00.093 07:12:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:00.093 07:12:44 -- paths/export.sh@5 -- $ export PATH
00:03:00.093 07:12:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:00.093 07:12:44 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:00.093 07:12:44 -- common/autobuild_common.sh@493 -- $ date +%s
00:03:00.093 07:12:44 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732601564.XXXXXX
00:03:00.093 07:12:44 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732601564.C9hfrg
00:03:00.093 07:12:44 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:00.093 07:12:44 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:03:00.093 07:12:44 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:03:00.093 07:12:44 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:03:00.093 07:12:44 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:03:00.093 07:12:44 -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:00.093 07:12:44 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:00.093 07:12:44 -- common/autotest_common.sh@10 -- $ set +x
00:03:00.093 07:12:44 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:03:00.093 07:12:44 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:00.093 07:12:44 -- pm/common@17 -- $ local monitor
00:03:00.093 07:12:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:00.093 07:12:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:00.093 07:12:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:00.093 07:12:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:00.093 07:12:44 -- pm/common@21 -- $ date +%s
00:03:00.093 07:12:44 -- pm/common@25 -- $ sleep 1
00:03:00.093 07:12:44 -- pm/common@21 -- $ date +%s
00:03:00.093 07:12:44 -- pm/common@21 -- $ date +%s
00:03:00.093 07:12:44 -- pm/common@21 -- $ date +%s
00:03:00.093 07:12:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732601564
00:03:00.093 07:12:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732601564
00:03:00.093 07:12:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732601564
00:03:00.093 07:12:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732601564
00:03:00.354 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732601564_collect-vmstat.pm.log
00:03:00.354 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732601564_collect-cpu-load.pm.log
00:03:00.354 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732601564_collect-cpu-temp.pm.log
00:03:00.354 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732601564_collect-bmc-pm.bmc.pm.log
00:03:01.301 07:12:45 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:01.301 07:12:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:01.301 07:12:45 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:01.301 07:12:45 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:01.301 07:12:45 -- spdk/autobuild.sh@16 -- $ date -u
00:03:01.301 Tue Nov 26 06:12:45 AM UTC 2024
00:03:01.301 07:12:45 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:01.301 v25.01-pre-239-g8afd1c921
00:03:01.301 07:12:45 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:03:01.301 07:12:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:01.301 07:12:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:01.301 07:12:45 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:01.301 07:12:45 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:01.301 07:12:45 -- common/autotest_common.sh@10 -- $ set +x
00:03:01.301 ************************************
00:03:01.301 START TEST ubsan
00:03:01.301 ************************************
00:03:01.301 07:12:45 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:01.301 using ubsan
00:03:01.301
00:03:01.301 real 0m0.001s
00:03:01.301 user 0m0.000s
00:03:01.301 sys 0m0.000s
00:03:01.301 07:12:45 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:01.301 07:12:45 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:01.301 ************************************
00:03:01.301 END TEST ubsan
00:03:01.301 ************************************
00:03:01.301 07:12:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:01.301 07:12:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:01.301 07:12:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:01.301 07:12:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:01.301 07:12:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:01.301 07:12:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:01.301 07:12:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:01.301 07:12:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:01.301 07:12:45 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:03:01.562 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:01.562 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:01.823 Using 'verbs' RDMA provider
00:03:17.681 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:29.907 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:30.167 Creating mk/config.mk...done.
00:03:30.167 Creating mk/cc.flags.mk...done.
00:03:30.167 Type 'make' to build.
00:03:30.167 07:13:14 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:03:30.167 07:13:14 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:30.167 07:13:14 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:30.167 07:13:14 -- common/autotest_common.sh@10 -- $ set +x
00:03:30.167 ************************************
00:03:30.167 START TEST make
00:03:30.167 ************************************
00:03:30.167 07:13:14 make -- common/autotest_common.sh@1129 -- $ make -j144
00:03:30.427 make[1]: Nothing to be done for 'all'.
00:03:31.807 The Meson build system
00:03:31.807 Version: 1.5.0
00:03:31.807 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:31.807 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:31.808 Build type: native build
00:03:31.808 Project name: libvfio-user
00:03:31.808 Project version: 0.0.1
00:03:31.808 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:31.808 C linker for the host machine: cc ld.bfd 2.40-14
00:03:31.808 Host machine cpu family: x86_64
00:03:31.808 Host machine cpu: x86_64
00:03:31.808 Run-time dependency threads found: YES
00:03:31.808 Library dl found: YES
00:03:31.808 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:31.808 Run-time dependency json-c found: YES 0.17
00:03:31.808 Run-time dependency cmocka found: YES 1.1.7
00:03:31.808 Program pytest-3 found: NO
00:03:31.808 Program flake8 found: NO
00:03:31.808 Program misspell-fixer found: NO
00:03:31.808 Program restructuredtext-lint found: NO
00:03:31.808 Program valgrind found: YES (/usr/bin/valgrind)
00:03:31.808 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:31.808 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:31.808 Compiler for C supports arguments -Wwrite-strings: YES
00:03:31.808 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:31.808 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:31.808 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:31.808 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:31.808 Build targets in project: 8
00:03:31.808 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:31.808 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:31.808
00:03:31.808 libvfio-user 0.0.1
00:03:31.808
00:03:31.808 User defined options
00:03:31.808 buildtype : debug
00:03:31.808 default_library: shared
00:03:31.808 libdir : /usr/local/lib
00:03:31.808
00:03:31.808 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:32.067 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:32.327 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:32.327 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:32.327 [3/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:32.327 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:32.327 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:32.327 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:32.327 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:32.327 [8/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:32.327 [9/37] Compiling C object samples/null.p/null.c.o
00:03:32.327 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:32.327 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:32.327 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:32.327 [13/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:32.327 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:32.327 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:32.327 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:32.327 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:32.327 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:32.327 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:32.327 [20/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:32.327 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:32.327 [22/37] Compiling C object samples/server.p/server.c.o
00:03:32.327 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:32.327 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:32.327 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:32.327 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:32.327 [27/37] Linking target lib/libvfio-user.so.0.0.1
00:03:32.327 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:32.327 [29/37] Compiling C object samples/client.p/client.c.o
00:03:32.327 [30/37] Linking target test/unit_tests
00:03:32.327 [31/37] Linking target samples/client
00:03:32.589 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:32.589 [33/37] Linking target samples/shadow_ioeventfd_server
00:03:32.589 [34/37] Linking target samples/lspci
00:03:32.589 [35/37] Linking target samples/server
00:03:32.589 [36/37] Linking target samples/null
00:03:32.589 [37/37] Linking target samples/gpio-pci-idio-16
00:03:32.589 INFO: autodetecting backend as ninja
00:03:32.589 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:32.849 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:32.849 ninja: no work to do.
00:03:39.445 The Meson build system
00:03:39.445 Version: 1.5.0
00:03:39.445 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:03:39.445 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:03:39.445 Build type: native build
00:03:39.445 Program cat found: YES (/usr/bin/cat)
00:03:39.445 Project name: DPDK
00:03:39.445 Project version: 24.03.0
00:03:39.445 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:39.445 C linker for the host machine: cc ld.bfd 2.40-14
00:03:39.445 Host machine cpu family: x86_64
00:03:39.445 Host machine cpu: x86_64
00:03:39.445 Message: ## Building in Developer Mode ##
00:03:39.445 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:39.445 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:39.445 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:39.445 Program python3 found: YES (/usr/bin/python3)
00:03:39.445 Program cat found: YES (/usr/bin/cat)
00:03:39.445 Compiler for C supports arguments -march=native: YES
00:03:39.445 Checking for size of "void *" : 8
00:03:39.445 Checking for size of "void *" : 8 (cached)
00:03:39.445 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:39.445 Library m found: YES
00:03:39.445 Library numa found: YES
00:03:39.445 Has header "numaif.h" : YES
00:03:39.445 Library fdt found: NO
00:03:39.445 Library execinfo found: NO
00:03:39.445 Has header "execinfo.h" : YES
00:03:39.445 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:39.445 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:39.445 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:39.445 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:39.445 Run-time dependency openssl found: YES 3.1.1
00:03:39.445 Run-time dependency libpcap found: YES 1.10.4
00:03:39.445 Has header "pcap.h" with dependency libpcap: YES
00:03:39.445 Compiler for C supports arguments -Wcast-qual: YES
00:03:39.445 Compiler for C supports arguments -Wdeprecated: YES
00:03:39.445 Compiler for C supports arguments -Wformat: YES
00:03:39.445 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:39.445 Compiler for C supports arguments -Wformat-security: NO
00:03:39.445 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:39.445 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:39.445 Compiler for C supports arguments -Wnested-externs: YES
00:03:39.445 Compiler for C supports arguments -Wold-style-definition: YES
00:03:39.445 Compiler for C supports arguments -Wpointer-arith: YES
00:03:39.445 Compiler for C supports arguments -Wsign-compare: YES
00:03:39.445 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:39.445 Compiler for C supports arguments -Wundef: YES
00:03:39.445 Compiler for C supports arguments -Wwrite-strings: YES
00:03:39.445 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:39.445 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:39.445 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:39.445 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:39.445 Program objdump found: YES (/usr/bin/objdump)
00:03:39.445 Compiler for C supports arguments -mavx512f: YES
00:03:39.445 Checking if "AVX512 checking" compiles: YES
00:03:39.445 Fetching value of define "__SSE4_2__" : 1
00:03:39.445 Fetching value of define "__AES__" : 1
00:03:39.445 Fetching value of define "__AVX__" : 1
00:03:39.445 Fetching value of define "__AVX2__" : 1
00:03:39.445 Fetching value of define "__AVX512BW__" : 1
00:03:39.445 Fetching value of define "__AVX512CD__" : 1
00:03:39.445 Fetching value of define "__AVX512DQ__" : 1
00:03:39.445 Fetching value of define "__AVX512F__" : 1
00:03:39.445 Fetching value of define "__AVX512VL__" : 1 00:03:39.445 Fetching value of define "__PCLMUL__" : 1 00:03:39.445 Fetching value of define "__RDRND__" : 1 00:03:39.445 Fetching value of define "__RDSEED__" : 1 00:03:39.445 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:39.445 Fetching value of define "__znver1__" : (undefined) 00:03:39.445 Fetching value of define "__znver2__" : (undefined) 00:03:39.445 Fetching value of define "__znver3__" : (undefined) 00:03:39.445 Fetching value of define "__znver4__" : (undefined) 00:03:39.445 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:39.445 Message: lib/log: Defining dependency "log" 00:03:39.445 Message: lib/kvargs: Defining dependency "kvargs" 00:03:39.445 Message: lib/telemetry: Defining dependency "telemetry" 00:03:39.445 Checking for function "getentropy" : NO 00:03:39.445 Message: lib/eal: Defining dependency "eal" 00:03:39.445 Message: lib/ring: Defining dependency "ring" 00:03:39.445 Message: lib/rcu: Defining dependency "rcu" 00:03:39.445 Message: lib/mempool: Defining dependency "mempool" 00:03:39.445 Message: lib/mbuf: Defining dependency "mbuf" 00:03:39.445 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:39.445 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:39.445 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:39.445 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:39.445 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:39.445 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:39.445 Compiler for C supports arguments -mpclmul: YES 00:03:39.445 Compiler for C supports arguments -maes: YES 00:03:39.445 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:39.445 Compiler for C supports arguments -mavx512bw: YES 00:03:39.445 Compiler for C supports arguments -mavx512dq: YES 00:03:39.445 Compiler for C supports arguments -mavx512vl: YES 00:03:39.445 Compiler for C supports arguments -mvpclmulqdq: YES 
00:03:39.445 Compiler for C supports arguments -mavx2: YES 00:03:39.445 Compiler for C supports arguments -mavx: YES 00:03:39.445 Message: lib/net: Defining dependency "net" 00:03:39.445 Message: lib/meter: Defining dependency "meter" 00:03:39.445 Message: lib/ethdev: Defining dependency "ethdev" 00:03:39.445 Message: lib/pci: Defining dependency "pci" 00:03:39.445 Message: lib/cmdline: Defining dependency "cmdline" 00:03:39.445 Message: lib/hash: Defining dependency "hash" 00:03:39.445 Message: lib/timer: Defining dependency "timer" 00:03:39.445 Message: lib/compressdev: Defining dependency "compressdev" 00:03:39.445 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:39.445 Message: lib/dmadev: Defining dependency "dmadev" 00:03:39.445 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:39.445 Message: lib/power: Defining dependency "power" 00:03:39.445 Message: lib/reorder: Defining dependency "reorder" 00:03:39.445 Message: lib/security: Defining dependency "security" 00:03:39.445 Has header "linux/userfaultfd.h" : YES 00:03:39.445 Has header "linux/vduse.h" : YES 00:03:39.445 Message: lib/vhost: Defining dependency "vhost" 00:03:39.445 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:39.445 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:39.445 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:39.445 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:39.445 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:39.445 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:39.445 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:39.445 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:39.445 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:39.445 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:39.445 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:39.445 Configuring doxy-api-html.conf using configuration 00:03:39.445 Configuring doxy-api-man.conf using configuration 00:03:39.445 Program mandb found: YES (/usr/bin/mandb) 00:03:39.445 Program sphinx-build found: NO 00:03:39.445 Configuring rte_build_config.h using configuration 00:03:39.445 Message: 00:03:39.445 ================= 00:03:39.445 Applications Enabled 00:03:39.445 ================= 00:03:39.445 00:03:39.445 apps: 00:03:39.445 00:03:39.445 00:03:39.445 Message: 00:03:39.445 ================= 00:03:39.445 Libraries Enabled 00:03:39.445 ================= 00:03:39.445 00:03:39.445 libs: 00:03:39.445 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:39.445 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:39.445 cryptodev, dmadev, power, reorder, security, vhost, 00:03:39.445 00:03:39.445 Message: 00:03:39.445 =============== 00:03:39.445 Drivers Enabled 00:03:39.445 =============== 00:03:39.445 00:03:39.445 common: 00:03:39.445 00:03:39.445 bus: 00:03:39.445 pci, vdev, 00:03:39.445 mempool: 00:03:39.445 ring, 00:03:39.445 dma: 00:03:39.445 00:03:39.445 net: 00:03:39.445 00:03:39.445 crypto: 00:03:39.445 00:03:39.445 compress: 00:03:39.445 00:03:39.445 vdpa: 00:03:39.445 00:03:39.445 00:03:39.445 Message: 00:03:39.445 ================= 00:03:39.445 Content Skipped 00:03:39.445 ================= 00:03:39.445 00:03:39.445 apps: 00:03:39.445 dumpcap: explicitly disabled via build config 00:03:39.445 graph: explicitly disabled via build config 00:03:39.445 pdump: explicitly disabled via build config 00:03:39.445 proc-info: explicitly disabled via build config 00:03:39.445 test-acl: explicitly disabled via build config 00:03:39.445 test-bbdev: explicitly disabled via build config 00:03:39.445 test-cmdline: explicitly disabled via build config 00:03:39.445 test-compress-perf: explicitly disabled via build config 00:03:39.445 test-crypto-perf: explicitly disabled via build 
config 00:03:39.446 test-dma-perf: explicitly disabled via build config 00:03:39.446 test-eventdev: explicitly disabled via build config 00:03:39.446 test-fib: explicitly disabled via build config 00:03:39.446 test-flow-perf: explicitly disabled via build config 00:03:39.446 test-gpudev: explicitly disabled via build config 00:03:39.446 test-mldev: explicitly disabled via build config 00:03:39.446 test-pipeline: explicitly disabled via build config 00:03:39.446 test-pmd: explicitly disabled via build config 00:03:39.446 test-regex: explicitly disabled via build config 00:03:39.446 test-sad: explicitly disabled via build config 00:03:39.446 test-security-perf: explicitly disabled via build config 00:03:39.446 00:03:39.446 libs: 00:03:39.446 argparse: explicitly disabled via build config 00:03:39.446 metrics: explicitly disabled via build config 00:03:39.446 acl: explicitly disabled via build config 00:03:39.446 bbdev: explicitly disabled via build config 00:03:39.446 bitratestats: explicitly disabled via build config 00:03:39.446 bpf: explicitly disabled via build config 00:03:39.446 cfgfile: explicitly disabled via build config 00:03:39.446 distributor: explicitly disabled via build config 00:03:39.446 efd: explicitly disabled via build config 00:03:39.446 eventdev: explicitly disabled via build config 00:03:39.446 dispatcher: explicitly disabled via build config 00:03:39.446 gpudev: explicitly disabled via build config 00:03:39.446 gro: explicitly disabled via build config 00:03:39.446 gso: explicitly disabled via build config 00:03:39.446 ip_frag: explicitly disabled via build config 00:03:39.446 jobstats: explicitly disabled via build config 00:03:39.446 latencystats: explicitly disabled via build config 00:03:39.446 lpm: explicitly disabled via build config 00:03:39.446 member: explicitly disabled via build config 00:03:39.446 pcapng: explicitly disabled via build config 00:03:39.446 rawdev: explicitly disabled via build config 00:03:39.446 regexdev: explicitly 
disabled via build config 00:03:39.446 mldev: explicitly disabled via build config 00:03:39.446 rib: explicitly disabled via build config 00:03:39.446 sched: explicitly disabled via build config 00:03:39.446 stack: explicitly disabled via build config 00:03:39.446 ipsec: explicitly disabled via build config 00:03:39.446 pdcp: explicitly disabled via build config 00:03:39.446 fib: explicitly disabled via build config 00:03:39.446 port: explicitly disabled via build config 00:03:39.446 pdump: explicitly disabled via build config 00:03:39.446 table: explicitly disabled via build config 00:03:39.446 pipeline: explicitly disabled via build config 00:03:39.446 graph: explicitly disabled via build config 00:03:39.446 node: explicitly disabled via build config 00:03:39.446 00:03:39.446 drivers: 00:03:39.446 common/cpt: not in enabled drivers build config 00:03:39.446 common/dpaax: not in enabled drivers build config 00:03:39.446 common/iavf: not in enabled drivers build config 00:03:39.446 common/idpf: not in enabled drivers build config 00:03:39.446 common/ionic: not in enabled drivers build config 00:03:39.446 common/mvep: not in enabled drivers build config 00:03:39.446 common/octeontx: not in enabled drivers build config 00:03:39.446 bus/auxiliary: not in enabled drivers build config 00:03:39.446 bus/cdx: not in enabled drivers build config 00:03:39.446 bus/dpaa: not in enabled drivers build config 00:03:39.446 bus/fslmc: not in enabled drivers build config 00:03:39.446 bus/ifpga: not in enabled drivers build config 00:03:39.446 bus/platform: not in enabled drivers build config 00:03:39.446 bus/uacce: not in enabled drivers build config 00:03:39.446 bus/vmbus: not in enabled drivers build config 00:03:39.446 common/cnxk: not in enabled drivers build config 00:03:39.446 common/mlx5: not in enabled drivers build config 00:03:39.446 common/nfp: not in enabled drivers build config 00:03:39.446 common/nitrox: not in enabled drivers build config 00:03:39.446 common/qat: not 
in enabled drivers build config 00:03:39.446 common/sfc_efx: not in enabled drivers build config 00:03:39.446 mempool/bucket: not in enabled drivers build config 00:03:39.446 mempool/cnxk: not in enabled drivers build config 00:03:39.446 mempool/dpaa: not in enabled drivers build config 00:03:39.446 mempool/dpaa2: not in enabled drivers build config 00:03:39.446 mempool/octeontx: not in enabled drivers build config 00:03:39.446 mempool/stack: not in enabled drivers build config 00:03:39.446 dma/cnxk: not in enabled drivers build config 00:03:39.446 dma/dpaa: not in enabled drivers build config 00:03:39.446 dma/dpaa2: not in enabled drivers build config 00:03:39.446 dma/hisilicon: not in enabled drivers build config 00:03:39.446 dma/idxd: not in enabled drivers build config 00:03:39.446 dma/ioat: not in enabled drivers build config 00:03:39.446 dma/skeleton: not in enabled drivers build config 00:03:39.446 net/af_packet: not in enabled drivers build config 00:03:39.446 net/af_xdp: not in enabled drivers build config 00:03:39.446 net/ark: not in enabled drivers build config 00:03:39.446 net/atlantic: not in enabled drivers build config 00:03:39.446 net/avp: not in enabled drivers build config 00:03:39.446 net/axgbe: not in enabled drivers build config 00:03:39.446 net/bnx2x: not in enabled drivers build config 00:03:39.446 net/bnxt: not in enabled drivers build config 00:03:39.446 net/bonding: not in enabled drivers build config 00:03:39.446 net/cnxk: not in enabled drivers build config 00:03:39.446 net/cpfl: not in enabled drivers build config 00:03:39.446 net/cxgbe: not in enabled drivers build config 00:03:39.446 net/dpaa: not in enabled drivers build config 00:03:39.446 net/dpaa2: not in enabled drivers build config 00:03:39.446 net/e1000: not in enabled drivers build config 00:03:39.446 net/ena: not in enabled drivers build config 00:03:39.446 net/enetc: not in enabled drivers build config 00:03:39.446 net/enetfec: not in enabled drivers build config 
00:03:39.446 net/enic: not in enabled drivers build config 00:03:39.446 net/failsafe: not in enabled drivers build config 00:03:39.446 net/fm10k: not in enabled drivers build config 00:03:39.446 net/gve: not in enabled drivers build config 00:03:39.446 net/hinic: not in enabled drivers build config 00:03:39.446 net/hns3: not in enabled drivers build config 00:03:39.446 net/i40e: not in enabled drivers build config 00:03:39.446 net/iavf: not in enabled drivers build config 00:03:39.446 net/ice: not in enabled drivers build config 00:03:39.446 net/idpf: not in enabled drivers build config 00:03:39.446 net/igc: not in enabled drivers build config 00:03:39.446 net/ionic: not in enabled drivers build config 00:03:39.446 net/ipn3ke: not in enabled drivers build config 00:03:39.446 net/ixgbe: not in enabled drivers build config 00:03:39.446 net/mana: not in enabled drivers build config 00:03:39.446 net/memif: not in enabled drivers build config 00:03:39.446 net/mlx4: not in enabled drivers build config 00:03:39.446 net/mlx5: not in enabled drivers build config 00:03:39.446 net/mvneta: not in enabled drivers build config 00:03:39.446 net/mvpp2: not in enabled drivers build config 00:03:39.446 net/netvsc: not in enabled drivers build config 00:03:39.446 net/nfb: not in enabled drivers build config 00:03:39.446 net/nfp: not in enabled drivers build config 00:03:39.446 net/ngbe: not in enabled drivers build config 00:03:39.446 net/null: not in enabled drivers build config 00:03:39.446 net/octeontx: not in enabled drivers build config 00:03:39.446 net/octeon_ep: not in enabled drivers build config 00:03:39.446 net/pcap: not in enabled drivers build config 00:03:39.446 net/pfe: not in enabled drivers build config 00:03:39.446 net/qede: not in enabled drivers build config 00:03:39.446 net/ring: not in enabled drivers build config 00:03:39.446 net/sfc: not in enabled drivers build config 00:03:39.446 net/softnic: not in enabled drivers build config 00:03:39.446 net/tap: not in 
enabled drivers build config 00:03:39.446 net/thunderx: not in enabled drivers build config 00:03:39.446 net/txgbe: not in enabled drivers build config 00:03:39.446 net/vdev_netvsc: not in enabled drivers build config 00:03:39.446 net/vhost: not in enabled drivers build config 00:03:39.446 net/virtio: not in enabled drivers build config 00:03:39.446 net/vmxnet3: not in enabled drivers build config 00:03:39.446 raw/*: missing internal dependency, "rawdev" 00:03:39.446 crypto/armv8: not in enabled drivers build config 00:03:39.446 crypto/bcmfs: not in enabled drivers build config 00:03:39.446 crypto/caam_jr: not in enabled drivers build config 00:03:39.446 crypto/ccp: not in enabled drivers build config 00:03:39.446 crypto/cnxk: not in enabled drivers build config 00:03:39.446 crypto/dpaa_sec: not in enabled drivers build config 00:03:39.446 crypto/dpaa2_sec: not in enabled drivers build config 00:03:39.446 crypto/ipsec_mb: not in enabled drivers build config 00:03:39.446 crypto/mlx5: not in enabled drivers build config 00:03:39.446 crypto/mvsam: not in enabled drivers build config 00:03:39.446 crypto/nitrox: not in enabled drivers build config 00:03:39.446 crypto/null: not in enabled drivers build config 00:03:39.446 crypto/octeontx: not in enabled drivers build config 00:03:39.446 crypto/openssl: not in enabled drivers build config 00:03:39.446 crypto/scheduler: not in enabled drivers build config 00:03:39.446 crypto/uadk: not in enabled drivers build config 00:03:39.446 crypto/virtio: not in enabled drivers build config 00:03:39.446 compress/isal: not in enabled drivers build config 00:03:39.446 compress/mlx5: not in enabled drivers build config 00:03:39.446 compress/nitrox: not in enabled drivers build config 00:03:39.446 compress/octeontx: not in enabled drivers build config 00:03:39.446 compress/zlib: not in enabled drivers build config 00:03:39.446 regex/*: missing internal dependency, "regexdev" 00:03:39.446 ml/*: missing internal dependency, "mldev" 
00:03:39.446 vdpa/ifc: not in enabled drivers build config 00:03:39.446 vdpa/mlx5: not in enabled drivers build config 00:03:39.446 vdpa/nfp: not in enabled drivers build config 00:03:39.446 vdpa/sfc: not in enabled drivers build config 00:03:39.446 event/*: missing internal dependency, "eventdev" 00:03:39.446 baseband/*: missing internal dependency, "bbdev" 00:03:39.446 gpu/*: missing internal dependency, "gpudev" 00:03:39.446 00:03:39.446 00:03:39.446 Build targets in project: 84 00:03:39.446 00:03:39.446 DPDK 24.03.0 00:03:39.446 00:03:39.446 User defined options 00:03:39.446 buildtype : debug 00:03:39.446 default_library : shared 00:03:39.446 libdir : lib 00:03:39.446 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:39.446 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:39.447 c_link_args : 00:03:39.447 cpu_instruction_set: native 00:03:39.447 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:03:39.447 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:03:39.447 enable_docs : false 00:03:39.447 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:39.447 enable_kmods : false 00:03:39.447 max_lcores : 128 00:03:39.447 tests : false 00:03:39.447 00:03:39.447 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:39.447 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:39.709 [1/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:39.709 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:39.709 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:39.709 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:39.709 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:39.709 [6/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:39.709 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:39.709 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:39.709 [9/267] Linking static target lib/librte_kvargs.a 00:03:39.709 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:39.709 [11/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:39.709 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:39.709 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:39.709 [14/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:39.709 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:39.709 [16/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:39.709 [17/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:39.709 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:39.709 [19/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:39.709 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:39.709 [21/267] Linking static target lib/librte_log.a 00:03:39.709 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:39.709 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:39.709 [24/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:39.709 [25/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:39.709 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:39.709 [27/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:39.709 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:39.709 [29/267] Linking static target lib/librte_pci.a 00:03:39.970 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:39.970 [31/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:39.970 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:39.970 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:39.970 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:39.970 [35/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:39.970 [36/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:39.970 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:39.970 [38/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:40.231 [39/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:40.231 [40/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.231 [41/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:40.231 [42/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.231 [43/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:40.231 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:40.231 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:40.231 [46/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:40.231 [47/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:40.231 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:40.231 [49/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:40.231 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:40.231 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:40.231 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:40.231 [53/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:40.231 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:40.231 [55/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:40.231 [56/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:40.231 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:40.231 [58/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:40.231 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:40.231 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:40.231 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:40.231 [62/267] Linking static target lib/librte_meter.a 00:03:40.231 [63/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:40.231 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:40.231 [65/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:40.231 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:40.231 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:40.231 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 
00:03:40.231 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:40.231 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:40.231 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:40.231 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:40.231 [73/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:40.231 [74/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:40.231 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:40.231 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:40.231 [77/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:40.231 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:40.231 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:40.231 [80/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:40.231 [81/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:40.231 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:40.231 [83/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:40.231 [84/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:40.231 [85/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:40.231 [86/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:40.231 [87/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:40.231 [88/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:40.231 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:40.231 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:40.231 [91/267] Linking static target 
lib/librte_telemetry.a 00:03:40.231 [92/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:40.231 [93/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:40.231 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:40.231 [95/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:40.231 [96/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:40.231 [97/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:40.231 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:40.231 [99/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:40.231 [100/267] Linking static target lib/librte_ring.a 00:03:40.231 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:40.231 [102/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:40.231 [103/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:40.231 [104/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:40.231 [105/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:40.231 [106/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:40.231 [107/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:40.231 [108/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:40.231 [109/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:40.231 [110/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:40.231 [111/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:40.231 [112/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:40.231 [113/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:40.231 
[114/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:40.231 [115/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:40.231 [116/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:40.231 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:40.232 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:40.232 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:40.232 [120/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:40.232 [121/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:40.232 [122/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:40.232 [123/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:40.232 [124/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:40.232 [125/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:40.232 [126/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:40.232 [127/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:40.232 [128/267] Linking static target lib/librte_timer.a 00:03:40.232 [129/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:40.232 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:40.232 [131/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:40.232 [132/267] Linking static target lib/librte_reorder.a 00:03:40.232 [133/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:40.232 [134/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:40.232 [135/267] Linking static target lib/librte_cmdline.a 00:03:40.232 [136/267] Linking static target lib/librte_net.a 00:03:40.232 [137/267] Compiling C object 
lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:40.232 [138/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:40.232 [139/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:40.232 [140/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:40.232 [141/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:40.232 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:40.232 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:40.232 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:40.232 [145/267] Linking static target lib/librte_dmadev.a 00:03:40.232 [146/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:40.232 [147/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:40.232 [148/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:40.492 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:40.492 [150/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:40.492 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:40.492 [152/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.492 [153/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:40.492 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:40.492 [155/267] Linking static target lib/librte_mempool.a 00:03:40.492 [156/267] Linking static target lib/librte_rcu.a 00:03:40.492 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:40.492 [158/267] Linking static target lib/librte_compressdev.a 00:03:40.492 [159/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:40.492 [160/267] Compiling C object 
lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:40.492 [161/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:40.492 [162/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:40.492 [163/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:40.492 [164/267] Linking target lib/librte_log.so.24.1 00:03:40.492 [165/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:40.492 [166/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:40.492 [167/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:40.492 [168/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:40.492 [169/267] Linking static target lib/librte_eal.a 00:03:40.492 [170/267] Linking static target lib/librte_power.a 00:03:40.492 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:40.492 [172/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:40.492 [173/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:40.492 [174/267] Linking static target lib/librte_security.a 00:03:40.492 [175/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:40.493 [176/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.493 [177/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:40.493 [178/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:40.493 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:40.493 [180/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:40.493 [181/267] Linking static target lib/librte_mbuf.a 00:03:40.493 [182/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:40.493 [183/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:40.493 
[184/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:40.493 [185/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:40.493 [186/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:40.493 [187/267] Linking target lib/librte_kvargs.so.24.1 00:03:40.493 [188/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:40.493 [189/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.493 [190/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:40.493 [191/267] Linking static target drivers/librte_bus_vdev.a 00:03:40.493 [192/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:40.493 [193/267] Linking static target lib/librte_hash.a 00:03:40.754 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:40.754 [195/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.754 [196/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:40.754 [197/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:40.754 [198/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:40.754 [199/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:40.754 [200/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:40.754 [201/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:40.754 [202/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:40.754 [203/267] Linking static target drivers/librte_bus_pci.a 00:03:40.754 [204/267] Linking static target drivers/librte_mempool_ring.a 00:03:40.754 [205/267] Generating lib/reorder.sym_chk 
with a custom command (wrapped by meson to capture output) 00:03:40.754 [206/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.754 [207/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:40.754 [208/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.754 [209/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:40.754 [210/267] Linking static target lib/librte_cryptodev.a 00:03:40.754 [211/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.754 [212/267] Linking target lib/librte_telemetry.so.24.1 00:03:41.015 [213/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:41.015 [214/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.015 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.277 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.277 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.277 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:41.277 [219/267] Linking static target lib/librte_ethdev.a 00:03:41.277 [220/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:41.539 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.539 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.539 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.539 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.801 [225/267] 
Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.801 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.372 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:42.372 [228/267] Linking static target lib/librte_vhost.a 00:03:42.945 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.330 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.924 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.867 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.867 [233/267] Linking target lib/librte_eal.so.24.1 00:03:51.867 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:51.867 [235/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:51.867 [236/267] Linking target lib/librte_meter.so.24.1 00:03:51.867 [237/267] Linking target lib/librte_ring.so.24.1 00:03:51.867 [238/267] Linking target lib/librte_timer.so.24.1 00:03:51.867 [239/267] Linking target lib/librte_pci.so.24.1 00:03:51.867 [240/267] Linking target lib/librte_dmadev.so.24.1 00:03:51.867 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:52.127 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:52.127 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:52.127 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:52.127 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:52.127 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:52.127 [247/267] Linking target lib/librte_mempool.so.24.1 00:03:52.127 [248/267] 
Linking target lib/librte_rcu.so.24.1 00:03:52.127 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:52.127 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:52.127 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:52.127 [252/267] Linking target lib/librte_mbuf.so.24.1 00:03:52.388 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:52.388 [254/267] Linking target lib/librte_compressdev.so.24.1 00:03:52.388 [255/267] Linking target lib/librte_net.so.24.1 00:03:52.388 [256/267] Linking target lib/librte_reorder.so.24.1 00:03:52.388 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:03:52.388 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:52.648 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:52.648 [260/267] Linking target lib/librte_cmdline.so.24.1 00:03:52.648 [261/267] Linking target lib/librte_security.so.24.1 00:03:52.648 [262/267] Linking target lib/librte_hash.so.24.1 00:03:52.648 [263/267] Linking target lib/librte_ethdev.so.24.1 00:03:52.648 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:52.648 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:52.909 [266/267] Linking target lib/librte_power.so.24.1 00:03:52.909 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:52.909 INFO: autodetecting backend as ninja 00:03:52.909 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:03:57.114 CC lib/ut_mock/mock.o 00:03:57.114 CC lib/log/log.o 00:03:57.114 CC lib/ut/ut.o 00:03:57.114 CC lib/log/log_flags.o 00:03:57.114 CC lib/log/log_deprecated.o 00:03:57.114 LIB libspdk_log.a 00:03:57.114 LIB libspdk_ut_mock.a 00:03:57.114 LIB 
libspdk_ut.a 00:03:57.114 SO libspdk_log.so.7.1 00:03:57.114 SO libspdk_ut_mock.so.6.0 00:03:57.114 SO libspdk_ut.so.2.0 00:03:57.114 SYMLINK libspdk_log.so 00:03:57.114 SYMLINK libspdk_ut_mock.so 00:03:57.114 SYMLINK libspdk_ut.so 00:03:57.114 CC lib/util/base64.o 00:03:57.114 CC lib/util/bit_array.o 00:03:57.114 CC lib/util/cpuset.o 00:03:57.114 CC lib/util/crc16.o 00:03:57.114 CC lib/util/crc32c.o 00:03:57.114 CC lib/util/crc32.o 00:03:57.114 CC lib/dma/dma.o 00:03:57.114 CC lib/util/crc32_ieee.o 00:03:57.114 CC lib/util/crc64.o 00:03:57.114 CC lib/util/dif.o 00:03:57.114 CC lib/util/fd.o 00:03:57.114 CXX lib/trace_parser/trace.o 00:03:57.114 CC lib/util/fd_group.o 00:03:57.114 CC lib/util/file.o 00:03:57.114 CC lib/ioat/ioat.o 00:03:57.114 CC lib/util/hexlify.o 00:03:57.114 CC lib/util/iov.o 00:03:57.114 CC lib/util/math.o 00:03:57.114 CC lib/util/net.o 00:03:57.114 CC lib/util/pipe.o 00:03:57.114 CC lib/util/strerror_tls.o 00:03:57.114 CC lib/util/string.o 00:03:57.114 CC lib/util/uuid.o 00:03:57.114 CC lib/util/xor.o 00:03:57.114 CC lib/util/zipf.o 00:03:57.114 CC lib/util/md5.o 00:03:57.114 CC lib/vfio_user/host/vfio_user_pci.o 00:03:57.114 CC lib/vfio_user/host/vfio_user.o 00:03:57.374 LIB libspdk_dma.a 00:03:57.374 SO libspdk_dma.so.5.0 00:03:57.374 LIB libspdk_ioat.a 00:03:57.375 SYMLINK libspdk_dma.so 00:03:57.375 SO libspdk_ioat.so.7.0 00:03:57.375 SYMLINK libspdk_ioat.so 00:03:57.375 LIB libspdk_vfio_user.a 00:03:57.636 SO libspdk_vfio_user.so.5.0 00:03:57.636 LIB libspdk_util.a 00:03:57.636 SYMLINK libspdk_vfio_user.so 00:03:57.636 SO libspdk_util.so.10.1 00:03:57.897 SYMLINK libspdk_util.so 00:03:57.897 LIB libspdk_trace_parser.a 00:03:57.897 SO libspdk_trace_parser.so.6.0 00:03:57.897 SYMLINK libspdk_trace_parser.so 00:03:58.158 CC lib/rdma_utils/rdma_utils.o 00:03:58.158 CC lib/conf/conf.o 00:03:58.158 CC lib/vmd/vmd.o 00:03:58.158 CC lib/vmd/led.o 00:03:58.158 CC lib/idxd/idxd.o 00:03:58.158 CC lib/json/json_parse.o 00:03:58.158 CC 
lib/idxd/idxd_user.o 00:03:58.158 CC lib/json/json_util.o 00:03:58.158 CC lib/env_dpdk/env.o 00:03:58.158 CC lib/json/json_write.o 00:03:58.158 CC lib/idxd/idxd_kernel.o 00:03:58.158 CC lib/env_dpdk/memory.o 00:03:58.158 CC lib/env_dpdk/pci.o 00:03:58.158 CC lib/env_dpdk/init.o 00:03:58.158 CC lib/env_dpdk/threads.o 00:03:58.158 CC lib/env_dpdk/pci_vmd.o 00:03:58.158 CC lib/env_dpdk/pci_ioat.o 00:03:58.158 CC lib/env_dpdk/pci_virtio.o 00:03:58.158 CC lib/env_dpdk/pci_idxd.o 00:03:58.158 CC lib/env_dpdk/pci_event.o 00:03:58.158 CC lib/env_dpdk/sigbus_handler.o 00:03:58.158 CC lib/env_dpdk/pci_dpdk.o 00:03:58.158 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:58.158 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:58.419 LIB libspdk_conf.a 00:03:58.419 SO libspdk_conf.so.6.0 00:03:58.419 LIB libspdk_rdma_utils.a 00:03:58.419 SO libspdk_rdma_utils.so.1.0 00:03:58.419 LIB libspdk_json.a 00:03:58.419 SYMLINK libspdk_conf.so 00:03:58.680 SO libspdk_json.so.6.0 00:03:58.680 SYMLINK libspdk_rdma_utils.so 00:03:58.680 SYMLINK libspdk_json.so 00:03:58.680 LIB libspdk_idxd.a 00:03:58.680 SO libspdk_idxd.so.12.1 00:03:58.680 LIB libspdk_vmd.a 00:03:58.941 SO libspdk_vmd.so.6.0 00:03:58.941 SYMLINK libspdk_idxd.so 00:03:58.941 SYMLINK libspdk_vmd.so 00:03:58.941 CC lib/rdma_provider/common.o 00:03:58.941 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:58.941 CC lib/jsonrpc/jsonrpc_server.o 00:03:58.941 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:58.941 CC lib/jsonrpc/jsonrpc_client.o 00:03:58.941 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:59.202 LIB libspdk_rdma_provider.a 00:03:59.202 SO libspdk_rdma_provider.so.7.0 00:03:59.202 LIB libspdk_jsonrpc.a 00:03:59.202 SYMLINK libspdk_rdma_provider.so 00:03:59.202 SO libspdk_jsonrpc.so.6.0 00:03:59.462 SYMLINK libspdk_jsonrpc.so 00:03:59.462 LIB libspdk_env_dpdk.a 00:03:59.462 SO libspdk_env_dpdk.so.15.1 00:03:59.722 SYMLINK libspdk_env_dpdk.so 00:03:59.722 CC lib/rpc/rpc.o 00:03:59.982 LIB libspdk_rpc.a 00:03:59.982 SO libspdk_rpc.so.6.0 00:03:59.982 
SYMLINK libspdk_rpc.so 00:04:00.242 CC lib/trace/trace.o 00:04:00.502 CC lib/trace/trace_flags.o 00:04:00.502 CC lib/trace/trace_rpc.o 00:04:00.502 CC lib/notify/notify.o 00:04:00.502 CC lib/notify/notify_rpc.o 00:04:00.502 CC lib/keyring/keyring.o 00:04:00.502 CC lib/keyring/keyring_rpc.o 00:04:00.502 LIB libspdk_notify.a 00:04:00.502 SO libspdk_notify.so.6.0 00:04:00.763 LIB libspdk_trace.a 00:04:00.763 LIB libspdk_keyring.a 00:04:00.763 SYMLINK libspdk_notify.so 00:04:00.763 SO libspdk_trace.so.11.0 00:04:00.763 SO libspdk_keyring.so.2.0 00:04:00.763 SYMLINK libspdk_keyring.so 00:04:00.763 SYMLINK libspdk_trace.so 00:04:01.024 CC lib/sock/sock.o 00:04:01.024 CC lib/sock/sock_rpc.o 00:04:01.024 CC lib/thread/thread.o 00:04:01.024 CC lib/thread/iobuf.o 00:04:01.593 LIB libspdk_sock.a 00:04:01.593 SO libspdk_sock.so.10.0 00:04:01.593 SYMLINK libspdk_sock.so 00:04:01.853 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:01.853 CC lib/nvme/nvme_ctrlr.o 00:04:01.853 CC lib/nvme/nvme_fabric.o 00:04:01.853 CC lib/nvme/nvme_ns_cmd.o 00:04:01.853 CC lib/nvme/nvme_ns.o 00:04:01.853 CC lib/nvme/nvme_pcie_common.o 00:04:01.853 CC lib/nvme/nvme_pcie.o 00:04:01.853 CC lib/nvme/nvme_qpair.o 00:04:01.853 CC lib/nvme/nvme.o 00:04:01.853 CC lib/nvme/nvme_quirks.o 00:04:01.853 CC lib/nvme/nvme_transport.o 00:04:01.853 CC lib/nvme/nvme_discovery.o 00:04:01.853 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:01.853 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:01.853 CC lib/nvme/nvme_tcp.o 00:04:01.853 CC lib/nvme/nvme_opal.o 00:04:01.853 CC lib/nvme/nvme_io_msg.o 00:04:01.853 CC lib/nvme/nvme_poll_group.o 00:04:01.853 CC lib/nvme/nvme_zns.o 00:04:01.853 CC lib/nvme/nvme_stubs.o 00:04:02.113 CC lib/nvme/nvme_auth.o 00:04:02.113 CC lib/nvme/nvme_cuse.o 00:04:02.113 CC lib/nvme/nvme_vfio_user.o 00:04:02.113 CC lib/nvme/nvme_rdma.o 00:04:02.372 LIB libspdk_thread.a 00:04:02.633 SO libspdk_thread.so.11.0 00:04:02.633 SYMLINK libspdk_thread.so 00:04:02.894 CC lib/blob/blobstore.o 00:04:02.894 CC lib/blob/request.o 
00:04:02.894 CC lib/blob/zeroes.o 00:04:02.894 CC lib/blob/blob_bs_dev.o 00:04:02.894 CC lib/accel/accel.o 00:04:02.894 CC lib/accel/accel_rpc.o 00:04:02.894 CC lib/accel/accel_sw.o 00:04:02.894 CC lib/fsdev/fsdev.o 00:04:02.894 CC lib/fsdev/fsdev_io.o 00:04:02.894 CC lib/fsdev/fsdev_rpc.o 00:04:02.894 CC lib/virtio/virtio.o 00:04:02.894 CC lib/vfu_tgt/tgt_endpoint.o 00:04:02.894 CC lib/virtio/virtio_vhost_user.o 00:04:02.894 CC lib/virtio/virtio_vfio_user.o 00:04:02.894 CC lib/vfu_tgt/tgt_rpc.o 00:04:02.894 CC lib/init/json_config.o 00:04:02.894 CC lib/virtio/virtio_pci.o 00:04:02.894 CC lib/init/subsystem.o 00:04:02.894 CC lib/init/subsystem_rpc.o 00:04:02.894 CC lib/init/rpc.o 00:04:03.154 LIB libspdk_init.a 00:04:03.415 SO libspdk_init.so.6.0 00:04:03.415 LIB libspdk_virtio.a 00:04:03.415 LIB libspdk_vfu_tgt.a 00:04:03.415 SO libspdk_virtio.so.7.0 00:04:03.415 SO libspdk_vfu_tgt.so.3.0 00:04:03.415 SYMLINK libspdk_init.so 00:04:03.415 SYMLINK libspdk_virtio.so 00:04:03.415 SYMLINK libspdk_vfu_tgt.so 00:04:03.676 LIB libspdk_fsdev.a 00:04:03.676 SO libspdk_fsdev.so.2.0 00:04:03.676 SYMLINK libspdk_fsdev.so 00:04:03.676 CC lib/event/app.o 00:04:03.676 CC lib/event/reactor.o 00:04:03.676 CC lib/event/log_rpc.o 00:04:03.676 CC lib/event/app_rpc.o 00:04:03.676 CC lib/event/scheduler_static.o 00:04:03.937 LIB libspdk_accel.a 00:04:03.937 LIB libspdk_nvme.a 00:04:03.937 SO libspdk_accel.so.16.0 00:04:03.937 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:03.937 SYMLINK libspdk_accel.so 00:04:04.199 SO libspdk_nvme.so.15.0 00:04:04.199 LIB libspdk_event.a 00:04:04.199 SO libspdk_event.so.14.0 00:04:04.199 SYMLINK libspdk_event.so 00:04:04.199 SYMLINK libspdk_nvme.so 00:04:04.461 CC lib/bdev/bdev.o 00:04:04.461 CC lib/bdev/bdev_rpc.o 00:04:04.461 CC lib/bdev/bdev_zone.o 00:04:04.461 CC lib/bdev/part.o 00:04:04.461 CC lib/bdev/scsi_nvme.o 00:04:04.722 LIB libspdk_fuse_dispatcher.a 00:04:04.722 SO libspdk_fuse_dispatcher.so.1.0 00:04:04.722 SYMLINK 
libspdk_fuse_dispatcher.so 00:04:05.664 LIB libspdk_blob.a 00:04:05.664 SO libspdk_blob.so.12.0 00:04:05.664 SYMLINK libspdk_blob.so 00:04:06.237 CC lib/blobfs/blobfs.o 00:04:06.237 CC lib/blobfs/tree.o 00:04:06.237 CC lib/lvol/lvol.o 00:04:06.810 LIB libspdk_bdev.a 00:04:06.810 SO libspdk_bdev.so.17.0 00:04:06.810 LIB libspdk_blobfs.a 00:04:06.810 SO libspdk_blobfs.so.11.0 00:04:06.810 SYMLINK libspdk_bdev.so 00:04:06.810 LIB libspdk_lvol.a 00:04:06.810 SYMLINK libspdk_blobfs.so 00:04:07.072 SO libspdk_lvol.so.11.0 00:04:07.072 SYMLINK libspdk_lvol.so 00:04:07.333 CC lib/nbd/nbd.o 00:04:07.333 CC lib/nbd/nbd_rpc.o 00:04:07.333 CC lib/ublk/ublk.o 00:04:07.333 CC lib/ublk/ublk_rpc.o 00:04:07.333 CC lib/nvmf/ctrlr.o 00:04:07.333 CC lib/nvmf/ctrlr_discovery.o 00:04:07.333 CC lib/nvmf/ctrlr_bdev.o 00:04:07.333 CC lib/nvmf/subsystem.o 00:04:07.333 CC lib/nvmf/nvmf.o 00:04:07.333 CC lib/nvmf/nvmf_rpc.o 00:04:07.333 CC lib/scsi/dev.o 00:04:07.333 CC lib/nvmf/transport.o 00:04:07.333 CC lib/scsi/lun.o 00:04:07.333 CC lib/nvmf/tcp.o 00:04:07.334 CC lib/scsi/port.o 00:04:07.334 CC lib/ftl/ftl_core.o 00:04:07.334 CC lib/nvmf/stubs.o 00:04:07.334 CC lib/ftl/ftl_layout.o 00:04:07.334 CC lib/scsi/scsi.o 00:04:07.334 CC lib/nvmf/mdns_server.o 00:04:07.334 CC lib/ftl/ftl_init.o 00:04:07.334 CC lib/nvmf/rdma.o 00:04:07.334 CC lib/scsi/scsi_bdev.o 00:04:07.334 CC lib/nvmf/vfio_user.o 00:04:07.334 CC lib/scsi/scsi_pr.o 00:04:07.334 CC lib/ftl/ftl_debug.o 00:04:07.334 CC lib/nvmf/auth.o 00:04:07.334 CC lib/ftl/ftl_io.o 00:04:07.334 CC lib/scsi/scsi_rpc.o 00:04:07.334 CC lib/ftl/ftl_sb.o 00:04:07.334 CC lib/scsi/task.o 00:04:07.334 CC lib/ftl/ftl_l2p.o 00:04:07.334 CC lib/ftl/ftl_l2p_flat.o 00:04:07.334 CC lib/ftl/ftl_nv_cache.o 00:04:07.334 CC lib/ftl/ftl_band.o 00:04:07.334 CC lib/ftl/ftl_band_ops.o 00:04:07.334 CC lib/ftl/ftl_writer.o 00:04:07.334 CC lib/ftl/ftl_rq.o 00:04:07.334 CC lib/ftl/ftl_reloc.o 00:04:07.334 CC lib/ftl/ftl_p2l_log.o 00:04:07.334 CC lib/ftl/ftl_l2p_cache.o 
00:04:07.334 CC lib/ftl/ftl_p2l.o 00:04:07.334 CC lib/ftl/mngt/ftl_mngt.o 00:04:07.334 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:07.334 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:07.334 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:07.334 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:07.334 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:07.334 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:07.334 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:07.334 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:07.334 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:07.334 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:07.334 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:07.334 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:07.334 CC lib/ftl/utils/ftl_conf.o 00:04:07.334 CC lib/ftl/utils/ftl_md.o 00:04:07.334 CC lib/ftl/utils/ftl_mempool.o 00:04:07.334 CC lib/ftl/utils/ftl_bitmap.o 00:04:07.334 CC lib/ftl/utils/ftl_property.o 00:04:07.334 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:07.334 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:07.334 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:07.334 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:07.334 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:07.334 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:07.334 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:07.334 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:07.334 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:07.334 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:07.334 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:07.334 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:07.334 CC lib/ftl/base/ftl_base_dev.o 00:04:07.334 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:07.334 CC lib/ftl/ftl_trace.o 00:04:07.334 CC lib/ftl/base/ftl_base_bdev.o 00:04:07.903 LIB libspdk_nbd.a 00:04:07.903 SO libspdk_nbd.so.7.0 00:04:07.903 LIB libspdk_scsi.a 00:04:07.903 SYMLINK libspdk_nbd.so 00:04:07.903 SO libspdk_scsi.so.9.0 00:04:07.903 SYMLINK libspdk_scsi.so 00:04:07.903 LIB libspdk_ublk.a 00:04:07.903 SO libspdk_ublk.so.3.0 00:04:08.164 SYMLINK libspdk_ublk.so 00:04:08.164 LIB libspdk_ftl.a 00:04:08.425 CC lib/iscsi/conn.o 00:04:08.425 CC 
lib/iscsi/init_grp.o 00:04:08.425 CC lib/iscsi/iscsi.o 00:04:08.425 CC lib/iscsi/param.o 00:04:08.425 CC lib/iscsi/portal_grp.o 00:04:08.425 CC lib/iscsi/tgt_node.o 00:04:08.425 CC lib/iscsi/iscsi_subsystem.o 00:04:08.425 CC lib/iscsi/iscsi_rpc.o 00:04:08.425 CC lib/iscsi/task.o 00:04:08.425 CC lib/vhost/vhost.o 00:04:08.425 CC lib/vhost/vhost_rpc.o 00:04:08.425 CC lib/vhost/vhost_scsi.o 00:04:08.425 CC lib/vhost/vhost_blk.o 00:04:08.425 CC lib/vhost/rte_vhost_user.o 00:04:08.425 SO libspdk_ftl.so.9.0 00:04:08.686 SYMLINK libspdk_ftl.so 00:04:09.260 LIB libspdk_nvmf.a 00:04:09.261 SO libspdk_nvmf.so.20.0 00:04:09.261 LIB libspdk_vhost.a 00:04:09.261 SO libspdk_vhost.so.8.0 00:04:09.523 SYMLINK libspdk_nvmf.so 00:04:09.523 SYMLINK libspdk_vhost.so 00:04:09.523 LIB libspdk_iscsi.a 00:04:09.523 SO libspdk_iscsi.so.8.0 00:04:09.785 SYMLINK libspdk_iscsi.so 00:04:10.357 CC module/env_dpdk/env_dpdk_rpc.o 00:04:10.357 CC module/vfu_device/vfu_virtio.o 00:04:10.357 CC module/vfu_device/vfu_virtio_blk.o 00:04:10.357 CC module/vfu_device/vfu_virtio_scsi.o 00:04:10.357 CC module/vfu_device/vfu_virtio_rpc.o 00:04:10.357 CC module/vfu_device/vfu_virtio_fs.o 00:04:10.357 LIB libspdk_env_dpdk_rpc.a 00:04:10.618 CC module/accel/iaa/accel_iaa.o 00:04:10.618 CC module/sock/posix/posix.o 00:04:10.618 CC module/accel/iaa/accel_iaa_rpc.o 00:04:10.618 CC module/accel/error/accel_error.o 00:04:10.618 CC module/accel/error/accel_error_rpc.o 00:04:10.618 CC module/accel/ioat/accel_ioat_rpc.o 00:04:10.618 CC module/accel/ioat/accel_ioat.o 00:04:10.618 CC module/blob/bdev/blob_bdev.o 00:04:10.618 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:10.618 CC module/scheduler/gscheduler/gscheduler.o 00:04:10.618 CC module/accel/dsa/accel_dsa.o 00:04:10.618 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:10.618 CC module/keyring/file/keyring.o 00:04:10.618 CC module/accel/dsa/accel_dsa_rpc.o 00:04:10.618 CC module/keyring/file/keyring_rpc.o 00:04:10.618 CC 
module/keyring/linux/keyring.o 00:04:10.618 CC module/keyring/linux/keyring_rpc.o 00:04:10.618 CC module/fsdev/aio/fsdev_aio.o 00:04:10.618 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:10.618 CC module/fsdev/aio/linux_aio_mgr.o 00:04:10.618 SO libspdk_env_dpdk_rpc.so.6.0 00:04:10.618 SYMLINK libspdk_env_dpdk_rpc.so 00:04:10.618 LIB libspdk_keyring_file.a 00:04:10.618 LIB libspdk_scheduler_gscheduler.a 00:04:10.618 LIB libspdk_keyring_linux.a 00:04:10.618 SO libspdk_scheduler_gscheduler.so.4.0 00:04:10.618 LIB libspdk_scheduler_dpdk_governor.a 00:04:10.618 LIB libspdk_accel_iaa.a 00:04:10.618 LIB libspdk_scheduler_dynamic.a 00:04:10.618 SO libspdk_keyring_file.so.2.0 00:04:10.879 LIB libspdk_accel_ioat.a 00:04:10.879 LIB libspdk_accel_error.a 00:04:10.879 SO libspdk_keyring_linux.so.1.0 00:04:10.879 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:10.879 SO libspdk_accel_iaa.so.3.0 00:04:10.879 SO libspdk_accel_ioat.so.6.0 00:04:10.879 SO libspdk_scheduler_dynamic.so.4.0 00:04:10.879 SYMLINK libspdk_scheduler_gscheduler.so 00:04:10.879 SO libspdk_accel_error.so.2.0 00:04:10.879 LIB libspdk_blob_bdev.a 00:04:10.879 SYMLINK libspdk_keyring_file.so 00:04:10.879 LIB libspdk_accel_dsa.a 00:04:10.879 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:10.879 SYMLINK libspdk_scheduler_dynamic.so 00:04:10.879 SYMLINK libspdk_accel_iaa.so 00:04:10.879 SYMLINK libspdk_keyring_linux.so 00:04:10.879 SO libspdk_blob_bdev.so.12.0 00:04:10.879 SYMLINK libspdk_accel_ioat.so 00:04:10.879 SO libspdk_accel_dsa.so.5.0 00:04:10.879 SYMLINK libspdk_accel_error.so 00:04:10.879 SYMLINK libspdk_blob_bdev.so 00:04:10.879 LIB libspdk_vfu_device.a 00:04:10.879 SYMLINK libspdk_accel_dsa.so 00:04:10.879 SO libspdk_vfu_device.so.3.0 00:04:11.140 SYMLINK libspdk_vfu_device.so 00:04:11.140 LIB libspdk_fsdev_aio.a 00:04:11.140 LIB libspdk_sock_posix.a 00:04:11.140 SO libspdk_fsdev_aio.so.1.0 00:04:11.140 SO libspdk_sock_posix.so.6.0 00:04:11.401 SYMLINK libspdk_fsdev_aio.so 00:04:11.401 SYMLINK 
libspdk_sock_posix.so 00:04:11.401 CC module/bdev/null/bdev_null.o 00:04:11.401 CC module/blobfs/bdev/blobfs_bdev.o 00:04:11.401 CC module/bdev/null/bdev_null_rpc.o 00:04:11.401 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:11.401 CC module/bdev/malloc/bdev_malloc.o 00:04:11.401 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:11.401 CC module/bdev/error/vbdev_error.o 00:04:11.401 CC module/bdev/error/vbdev_error_rpc.o 00:04:11.401 CC module/bdev/aio/bdev_aio.o 00:04:11.401 CC module/bdev/aio/bdev_aio_rpc.o 00:04:11.401 CC module/bdev/delay/vbdev_delay.o 00:04:11.401 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:11.401 CC module/bdev/gpt/gpt.o 00:04:11.401 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:11.401 CC module/bdev/gpt/vbdev_gpt.o 00:04:11.401 CC module/bdev/raid/bdev_raid.o 00:04:11.401 CC module/bdev/nvme/bdev_nvme.o 00:04:11.401 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:11.401 CC module/bdev/raid/bdev_raid_rpc.o 00:04:11.401 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:11.401 CC module/bdev/raid/bdev_raid_sb.o 00:04:11.401 CC module/bdev/raid/raid0.o 00:04:11.401 CC module/bdev/nvme/nvme_rpc.o 00:04:11.401 CC module/bdev/nvme/bdev_mdns_client.o 00:04:11.401 CC module/bdev/raid/raid1.o 00:04:11.401 CC module/bdev/nvme/vbdev_opal.o 00:04:11.401 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:11.401 CC module/bdev/split/vbdev_split.o 00:04:11.401 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:11.401 CC module/bdev/raid/concat.o 00:04:11.401 CC module/bdev/lvol/vbdev_lvol.o 00:04:11.401 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:11.401 CC module/bdev/passthru/vbdev_passthru.o 00:04:11.401 CC module/bdev/split/vbdev_split_rpc.o 00:04:11.401 CC module/bdev/ftl/bdev_ftl.o 00:04:11.401 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:11.401 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:11.401 CC module/bdev/iscsi/bdev_iscsi.o 00:04:11.401 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:11.401 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:11.401 CC 
module/bdev/virtio/bdev_virtio_blk.o 00:04:11.401 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:11.662 LIB libspdk_blobfs_bdev.a 00:04:11.663 SO libspdk_blobfs_bdev.so.6.0 00:04:11.663 LIB libspdk_bdev_null.a 00:04:11.663 LIB libspdk_bdev_split.a 00:04:11.663 LIB libspdk_bdev_error.a 00:04:11.663 SYMLINK libspdk_blobfs_bdev.so 00:04:11.924 SO libspdk_bdev_error.so.6.0 00:04:11.924 SO libspdk_bdev_null.so.6.0 00:04:11.925 SO libspdk_bdev_split.so.6.0 00:04:11.925 LIB libspdk_bdev_gpt.a 00:04:11.925 LIB libspdk_bdev_ftl.a 00:04:11.925 LIB libspdk_bdev_aio.a 00:04:11.925 LIB libspdk_bdev_passthru.a 00:04:11.925 SO libspdk_bdev_gpt.so.6.0 00:04:11.925 SO libspdk_bdev_ftl.so.6.0 00:04:11.925 LIB libspdk_bdev_malloc.a 00:04:11.925 SYMLINK libspdk_bdev_null.so 00:04:11.925 SYMLINK libspdk_bdev_error.so 00:04:11.925 LIB libspdk_bdev_zone_block.a 00:04:11.925 SO libspdk_bdev_aio.so.6.0 00:04:11.925 SO libspdk_bdev_malloc.so.6.0 00:04:11.925 SO libspdk_bdev_passthru.so.6.0 00:04:11.925 SYMLINK libspdk_bdev_split.so 00:04:11.925 LIB libspdk_bdev_iscsi.a 00:04:11.925 LIB libspdk_bdev_delay.a 00:04:11.925 SO libspdk_bdev_zone_block.so.6.0 00:04:11.925 SYMLINK libspdk_bdev_ftl.so 00:04:11.925 SYMLINK libspdk_bdev_gpt.so 00:04:11.925 SO libspdk_bdev_iscsi.so.6.0 00:04:11.925 SYMLINK libspdk_bdev_malloc.so 00:04:11.925 SYMLINK libspdk_bdev_aio.so 00:04:11.925 SO libspdk_bdev_delay.so.6.0 00:04:11.925 SYMLINK libspdk_bdev_passthru.so 00:04:11.925 SYMLINK libspdk_bdev_zone_block.so 00:04:11.925 LIB libspdk_bdev_virtio.a 00:04:11.925 SYMLINK libspdk_bdev_iscsi.so 00:04:11.925 LIB libspdk_bdev_lvol.a 00:04:11.925 SYMLINK libspdk_bdev_delay.so 00:04:12.186 SO libspdk_bdev_lvol.so.6.0 00:04:12.186 SO libspdk_bdev_virtio.so.6.0 00:04:12.186 SYMLINK libspdk_bdev_lvol.so 00:04:12.186 SYMLINK libspdk_bdev_virtio.so 00:04:12.446 LIB libspdk_bdev_raid.a 00:04:12.446 SO libspdk_bdev_raid.so.6.0 00:04:12.446 SYMLINK libspdk_bdev_raid.so 00:04:13.833 LIB libspdk_bdev_nvme.a 00:04:13.833 SO 
libspdk_bdev_nvme.so.7.1 00:04:13.833 SYMLINK libspdk_bdev_nvme.so 00:04:14.405 CC module/event/subsystems/scheduler/scheduler.o 00:04:14.405 CC module/event/subsystems/iobuf/iobuf.o 00:04:14.405 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:14.405 CC module/event/subsystems/sock/sock.o 00:04:14.405 CC module/event/subsystems/vmd/vmd.o 00:04:14.405 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:14.405 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:14.405 CC module/event/subsystems/fsdev/fsdev.o 00:04:14.405 CC module/event/subsystems/keyring/keyring.o 00:04:14.405 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:14.667 LIB libspdk_event_scheduler.a 00:04:14.667 SO libspdk_event_scheduler.so.4.0 00:04:14.667 LIB libspdk_event_vhost_blk.a 00:04:14.667 LIB libspdk_event_iobuf.a 00:04:14.667 LIB libspdk_event_keyring.a 00:04:14.667 LIB libspdk_event_sock.a 00:04:14.667 LIB libspdk_event_fsdev.a 00:04:14.667 LIB libspdk_event_vmd.a 00:04:14.667 LIB libspdk_event_vfu_tgt.a 00:04:14.667 SO libspdk_event_vhost_blk.so.3.0 00:04:14.667 SO libspdk_event_keyring.so.1.0 00:04:14.667 SO libspdk_event_sock.so.5.0 00:04:14.667 SO libspdk_event_iobuf.so.3.0 00:04:14.667 SO libspdk_event_fsdev.so.1.0 00:04:14.667 SO libspdk_event_vmd.so.6.0 00:04:14.667 SO libspdk_event_vfu_tgt.so.3.0 00:04:14.667 SYMLINK libspdk_event_scheduler.so 00:04:14.667 SYMLINK libspdk_event_vhost_blk.so 00:04:14.667 SYMLINK libspdk_event_keyring.so 00:04:14.667 SYMLINK libspdk_event_iobuf.so 00:04:14.667 SYMLINK libspdk_event_sock.so 00:04:14.667 SYMLINK libspdk_event_fsdev.so 00:04:14.667 SYMLINK libspdk_event_vfu_tgt.so 00:04:14.667 SYMLINK libspdk_event_vmd.so 00:04:15.238 CC module/event/subsystems/accel/accel.o 00:04:15.238 LIB libspdk_event_accel.a 00:04:15.238 SO libspdk_event_accel.so.6.0 00:04:15.499 SYMLINK libspdk_event_accel.so 00:04:15.760 CC module/event/subsystems/bdev/bdev.o 00:04:15.760 LIB libspdk_event_bdev.a 00:04:16.022 SO libspdk_event_bdev.so.6.0 00:04:16.022 SYMLINK 
libspdk_event_bdev.so 00:04:16.284 CC module/event/subsystems/ublk/ublk.o 00:04:16.284 CC module/event/subsystems/nbd/nbd.o 00:04:16.284 CC module/event/subsystems/scsi/scsi.o 00:04:16.284 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:16.284 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:16.545 LIB libspdk_event_ublk.a 00:04:16.545 LIB libspdk_event_nbd.a 00:04:16.545 LIB libspdk_event_scsi.a 00:04:16.545 SO libspdk_event_ublk.so.3.0 00:04:16.545 SO libspdk_event_nbd.so.6.0 00:04:16.545 SO libspdk_event_scsi.so.6.0 00:04:16.545 LIB libspdk_event_nvmf.a 00:04:16.545 SYMLINK libspdk_event_ublk.so 00:04:16.545 SYMLINK libspdk_event_nbd.so 00:04:16.545 SYMLINK libspdk_event_scsi.so 00:04:16.545 SO libspdk_event_nvmf.so.6.0 00:04:16.806 SYMLINK libspdk_event_nvmf.so 00:04:17.067 CC module/event/subsystems/iscsi/iscsi.o 00:04:17.067 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:17.067 LIB libspdk_event_vhost_scsi.a 00:04:17.067 LIB libspdk_event_iscsi.a 00:04:17.067 SO libspdk_event_vhost_scsi.so.3.0 00:04:17.328 SO libspdk_event_iscsi.so.6.0 00:04:17.328 SYMLINK libspdk_event_vhost_scsi.so 00:04:17.328 SYMLINK libspdk_event_iscsi.so 00:04:17.590 SO libspdk.so.6.0 00:04:17.590 SYMLINK libspdk.so 00:04:17.852 CXX app/trace/trace.o 00:04:17.852 CC app/spdk_lspci/spdk_lspci.o 00:04:17.852 CC app/trace_record/trace_record.o 00:04:17.852 CC test/rpc_client/rpc_client_test.o 00:04:17.852 CC app/spdk_nvme_discover/discovery_aer.o 00:04:17.852 TEST_HEADER include/spdk/accel.h 00:04:17.852 TEST_HEADER include/spdk/accel_module.h 00:04:17.852 TEST_HEADER include/spdk/assert.h 00:04:17.852 CC app/spdk_top/spdk_top.o 00:04:17.852 TEST_HEADER include/spdk/barrier.h 00:04:17.852 CC app/spdk_nvme_identify/identify.o 00:04:17.852 TEST_HEADER include/spdk/base64.h 00:04:17.852 CC app/spdk_nvme_perf/perf.o 00:04:17.852 TEST_HEADER include/spdk/bdev.h 00:04:17.852 TEST_HEADER include/spdk/bdev_module.h 00:04:17.852 TEST_HEADER include/spdk/bit_array.h 00:04:17.852 
TEST_HEADER include/spdk/bdev_zone.h 00:04:17.852 CC app/iscsi_tgt/iscsi_tgt.o 00:04:17.852 TEST_HEADER include/spdk/bit_pool.h 00:04:17.852 TEST_HEADER include/spdk/blob_bdev.h 00:04:17.852 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:17.852 TEST_HEADER include/spdk/blobfs.h 00:04:17.852 TEST_HEADER include/spdk/blob.h 00:04:17.852 TEST_HEADER include/spdk/conf.h 00:04:17.852 TEST_HEADER include/spdk/config.h 00:04:17.852 TEST_HEADER include/spdk/cpuset.h 00:04:17.852 TEST_HEADER include/spdk/crc16.h 00:04:17.852 TEST_HEADER include/spdk/crc32.h 00:04:17.852 TEST_HEADER include/spdk/crc64.h 00:04:17.852 TEST_HEADER include/spdk/dif.h 00:04:17.852 TEST_HEADER include/spdk/dma.h 00:04:17.852 TEST_HEADER include/spdk/env_dpdk.h 00:04:17.852 TEST_HEADER include/spdk/endian.h 00:04:17.852 TEST_HEADER include/spdk/env.h 00:04:17.852 TEST_HEADER include/spdk/event.h 00:04:17.852 TEST_HEADER include/spdk/file.h 00:04:17.852 TEST_HEADER include/spdk/fd_group.h 00:04:17.852 TEST_HEADER include/spdk/fd.h 00:04:17.852 TEST_HEADER include/spdk/fsdev_module.h 00:04:17.852 TEST_HEADER include/spdk/fsdev.h 00:04:17.852 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:17.852 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:17.852 TEST_HEADER include/spdk/ftl.h 00:04:17.852 TEST_HEADER include/spdk/gpt_spec.h 00:04:17.852 TEST_HEADER include/spdk/hexlify.h 00:04:17.852 TEST_HEADER include/spdk/histogram_data.h 00:04:17.852 TEST_HEADER include/spdk/idxd.h 00:04:17.852 TEST_HEADER include/spdk/init.h 00:04:17.852 TEST_HEADER include/spdk/idxd_spec.h 00:04:17.852 TEST_HEADER include/spdk/ioat_spec.h 00:04:17.852 TEST_HEADER include/spdk/ioat.h 00:04:17.852 TEST_HEADER include/spdk/iscsi_spec.h 00:04:17.852 CC app/nvmf_tgt/nvmf_main.o 00:04:17.852 TEST_HEADER include/spdk/json.h 00:04:17.852 TEST_HEADER include/spdk/jsonrpc.h 00:04:17.852 CC app/spdk_dd/spdk_dd.o 00:04:17.852 TEST_HEADER include/spdk/keyring.h 00:04:17.852 TEST_HEADER include/spdk/keyring_module.h 00:04:17.852 
TEST_HEADER include/spdk/likely.h 00:04:17.852 TEST_HEADER include/spdk/lvol.h 00:04:17.852 TEST_HEADER include/spdk/log.h 00:04:17.852 TEST_HEADER include/spdk/md5.h 00:04:17.852 TEST_HEADER include/spdk/memory.h 00:04:17.852 CC app/spdk_tgt/spdk_tgt.o 00:04:17.852 TEST_HEADER include/spdk/nbd.h 00:04:17.852 TEST_HEADER include/spdk/mmio.h 00:04:17.852 TEST_HEADER include/spdk/net.h 00:04:17.852 TEST_HEADER include/spdk/nvme.h 00:04:17.852 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:17.852 TEST_HEADER include/spdk/notify.h 00:04:17.852 TEST_HEADER include/spdk/nvme_intel.h 00:04:17.852 TEST_HEADER include/spdk/nvme_spec.h 00:04:17.852 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:17.852 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:17.852 TEST_HEADER include/spdk/nvme_zns.h 00:04:17.852 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:17.852 TEST_HEADER include/spdk/nvmf_spec.h 00:04:17.852 TEST_HEADER include/spdk/nvmf.h 00:04:17.852 TEST_HEADER include/spdk/nvmf_transport.h 00:04:17.852 TEST_HEADER include/spdk/opal_spec.h 00:04:17.852 TEST_HEADER include/spdk/opal.h 00:04:17.852 TEST_HEADER include/spdk/pci_ids.h 00:04:17.852 TEST_HEADER include/spdk/pipe.h 00:04:17.852 TEST_HEADER include/spdk/queue.h 00:04:18.116 TEST_HEADER include/spdk/reduce.h 00:04:18.116 TEST_HEADER include/spdk/rpc.h 00:04:18.116 TEST_HEADER include/spdk/scheduler.h 00:04:18.116 TEST_HEADER include/spdk/scsi.h 00:04:18.116 TEST_HEADER include/spdk/sock.h 00:04:18.116 TEST_HEADER include/spdk/scsi_spec.h 00:04:18.116 TEST_HEADER include/spdk/stdinc.h 00:04:18.116 TEST_HEADER include/spdk/thread.h 00:04:18.116 TEST_HEADER include/spdk/string.h 00:04:18.116 TEST_HEADER include/spdk/trace.h 00:04:18.116 TEST_HEADER include/spdk/trace_parser.h 00:04:18.116 TEST_HEADER include/spdk/tree.h 00:04:18.116 TEST_HEADER include/spdk/ublk.h 00:04:18.116 TEST_HEADER include/spdk/util.h 00:04:18.116 TEST_HEADER include/spdk/uuid.h 00:04:18.116 TEST_HEADER include/spdk/version.h 00:04:18.116 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:04:18.116 TEST_HEADER include/spdk/vhost.h 00:04:18.116 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:18.116 TEST_HEADER include/spdk/vmd.h 00:04:18.116 TEST_HEADER include/spdk/xor.h 00:04:18.116 TEST_HEADER include/spdk/zipf.h 00:04:18.116 CXX test/cpp_headers/accel.o 00:04:18.116 CXX test/cpp_headers/accel_module.o 00:04:18.116 CXX test/cpp_headers/assert.o 00:04:18.116 CXX test/cpp_headers/barrier.o 00:04:18.116 CXX test/cpp_headers/base64.o 00:04:18.116 CXX test/cpp_headers/bdev.o 00:04:18.116 CXX test/cpp_headers/bdev_module.o 00:04:18.116 CXX test/cpp_headers/bdev_zone.o 00:04:18.116 CXX test/cpp_headers/bit_array.o 00:04:18.116 CXX test/cpp_headers/bit_pool.o 00:04:18.116 CXX test/cpp_headers/blob_bdev.o 00:04:18.116 CXX test/cpp_headers/blobfs_bdev.o 00:04:18.116 CXX test/cpp_headers/blobfs.o 00:04:18.116 CXX test/cpp_headers/blob.o 00:04:18.116 CXX test/cpp_headers/conf.o 00:04:18.116 CXX test/cpp_headers/config.o 00:04:18.116 CXX test/cpp_headers/crc32.o 00:04:18.116 CXX test/cpp_headers/cpuset.o 00:04:18.116 CXX test/cpp_headers/crc16.o 00:04:18.116 CXX test/cpp_headers/crc64.o 00:04:18.116 CXX test/cpp_headers/dif.o 00:04:18.116 CXX test/cpp_headers/env.o 00:04:18.116 CXX test/cpp_headers/dma.o 00:04:18.116 CXX test/cpp_headers/endian.o 00:04:18.116 CXX test/cpp_headers/event.o 00:04:18.116 CXX test/cpp_headers/env_dpdk.o 00:04:18.116 CXX test/cpp_headers/fd.o 00:04:18.116 CXX test/cpp_headers/fd_group.o 00:04:18.116 CXX test/cpp_headers/file.o 00:04:18.116 CXX test/cpp_headers/fsdev.o 00:04:18.116 CXX test/cpp_headers/fuse_dispatcher.o 00:04:18.116 CXX test/cpp_headers/fsdev_module.o 00:04:18.116 CXX test/cpp_headers/gpt_spec.o 00:04:18.116 CXX test/cpp_headers/ftl.o 00:04:18.116 CXX test/cpp_headers/hexlify.o 00:04:18.116 CXX test/cpp_headers/idxd_spec.o 00:04:18.116 CXX test/cpp_headers/histogram_data.o 00:04:18.116 CXX test/cpp_headers/idxd.o 00:04:18.116 CXX test/cpp_headers/init.o 00:04:18.116 CXX 
test/cpp_headers/ioat.o 00:04:18.116 CXX test/cpp_headers/iscsi_spec.o 00:04:18.116 CXX test/cpp_headers/json.o 00:04:18.116 CXX test/cpp_headers/ioat_spec.o 00:04:18.116 CXX test/cpp_headers/jsonrpc.o 00:04:18.116 CXX test/cpp_headers/keyring_module.o 00:04:18.116 CXX test/cpp_headers/keyring.o 00:04:18.116 CXX test/cpp_headers/log.o 00:04:18.116 CXX test/cpp_headers/memory.o 00:04:18.117 CXX test/cpp_headers/lvol.o 00:04:18.117 CXX test/cpp_headers/md5.o 00:04:18.117 CXX test/cpp_headers/likely.o 00:04:18.117 CXX test/cpp_headers/mmio.o 00:04:18.117 CXX test/cpp_headers/nbd.o 00:04:18.117 CXX test/cpp_headers/net.o 00:04:18.117 CXX test/cpp_headers/notify.o 00:04:18.117 CXX test/cpp_headers/nvme.o 00:04:18.117 CXX test/cpp_headers/nvme_intel.o 00:04:18.117 CXX test/cpp_headers/nvme_zns.o 00:04:18.117 LINK spdk_lspci 00:04:18.117 CC test/app/stub/stub.o 00:04:18.117 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:18.117 CXX test/cpp_headers/nvme_ocssd.o 00:04:18.117 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:18.117 CXX test/cpp_headers/nvme_spec.o 00:04:18.117 CXX test/cpp_headers/nvmf_cmd.o 00:04:18.117 CXX test/cpp_headers/nvmf.o 00:04:18.117 CC test/app/jsoncat/jsoncat.o 00:04:18.117 CXX test/cpp_headers/nvmf_transport.o 00:04:18.117 CXX test/cpp_headers/opal_spec.o 00:04:18.117 CXX test/cpp_headers/nvmf_spec.o 00:04:18.117 CXX test/cpp_headers/opal.o 00:04:18.117 CC examples/util/zipf/zipf.o 00:04:18.117 CXX test/cpp_headers/pci_ids.o 00:04:18.117 CXX test/cpp_headers/queue.o 00:04:18.117 CC examples/ioat/verify/verify.o 00:04:18.117 CC test/app/histogram_perf/histogram_perf.o 00:04:18.117 CXX test/cpp_headers/pipe.o 00:04:18.117 CC test/thread/poller_perf/poller_perf.o 00:04:18.117 CXX test/cpp_headers/rpc.o 00:04:18.117 CXX test/cpp_headers/reduce.o 00:04:18.117 CXX test/cpp_headers/scheduler.o 00:04:18.117 CXX test/cpp_headers/scsi.o 00:04:18.117 CC examples/ioat/perf/perf.o 00:04:18.117 CXX test/cpp_headers/scsi_spec.o 00:04:18.117 CXX 
test/cpp_headers/stdinc.o 00:04:18.117 CXX test/cpp_headers/sock.o 00:04:18.117 CXX test/cpp_headers/string.o 00:04:18.117 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:18.117 CXX test/cpp_headers/thread.o 00:04:18.117 CXX test/cpp_headers/trace_parser.o 00:04:18.117 CXX test/cpp_headers/trace.o 00:04:18.117 CXX test/cpp_headers/tree.o 00:04:18.117 CXX test/cpp_headers/ublk.o 00:04:18.117 CXX test/cpp_headers/util.o 00:04:18.117 CXX test/cpp_headers/uuid.o 00:04:18.117 CXX test/cpp_headers/vfio_user_spec.o 00:04:18.117 CXX test/cpp_headers/version.o 00:04:18.117 CXX test/cpp_headers/vhost.o 00:04:18.117 CXX test/cpp_headers/vmd.o 00:04:18.117 CXX test/cpp_headers/vfio_user_pci.o 00:04:18.117 CXX test/cpp_headers/xor.o 00:04:18.117 CXX test/cpp_headers/zipf.o 00:04:18.117 CC app/fio/nvme/fio_plugin.o 00:04:18.117 CC test/env/pci/pci_ut.o 00:04:18.117 CC test/env/vtophys/vtophys.o 00:04:18.117 CC test/env/memory/memory_ut.o 00:04:18.117 CC test/app/bdev_svc/bdev_svc.o 00:04:18.117 CC test/dma/test_dma/test_dma.o 00:04:18.117 LINK spdk_nvme_discover 00:04:18.117 LINK rpc_client_test 00:04:18.379 CC app/fio/bdev/fio_plugin.o 00:04:18.379 LINK iscsi_tgt 00:04:18.379 LINK interrupt_tgt 00:04:18.379 LINK nvmf_tgt 00:04:18.379 LINK spdk_trace_record 00:04:18.379 LINK spdk_tgt 00:04:18.637 CC test/env/mem_callbacks/mem_callbacks.o 00:04:18.637 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:18.637 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:18.637 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:18.637 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:18.637 LINK spdk_trace 00:04:18.637 LINK jsoncat 00:04:18.637 LINK histogram_perf 00:04:18.637 LINK poller_perf 00:04:18.637 LINK spdk_dd 00:04:18.897 LINK stub 00:04:18.897 LINK env_dpdk_post_init 00:04:18.897 LINK zipf 00:04:18.897 LINK vtophys 00:04:18.897 LINK bdev_svc 00:04:18.897 LINK ioat_perf 00:04:18.897 LINK verify 00:04:18.897 CC app/vhost/vhost.o 00:04:18.897 LINK spdk_top 00:04:19.157 LINK pci_ut 
00:04:19.157 LINK nvme_fuzz 00:04:19.157 LINK test_dma 00:04:19.157 LINK vhost_fuzz 00:04:19.157 LINK spdk_nvme 00:04:19.157 LINK spdk_bdev 00:04:19.157 LINK spdk_nvme_perf 00:04:19.157 LINK vhost 00:04:19.157 LINK spdk_nvme_identify 00:04:19.157 CC test/event/event_perf/event_perf.o 00:04:19.157 CC test/event/reactor/reactor.o 00:04:19.157 CC test/event/reactor_perf/reactor_perf.o 00:04:19.157 CC test/event/app_repeat/app_repeat.o 00:04:19.157 CC test/event/scheduler/scheduler.o 00:04:19.418 LINK mem_callbacks 00:04:19.418 CC examples/sock/hello_world/hello_sock.o 00:04:19.418 CC examples/vmd/led/led.o 00:04:19.418 CC examples/vmd/lsvmd/lsvmd.o 00:04:19.418 CC examples/idxd/perf/perf.o 00:04:19.418 CC examples/thread/thread/thread_ex.o 00:04:19.418 LINK reactor 00:04:19.418 LINK reactor_perf 00:04:19.418 LINK event_perf 00:04:19.418 LINK app_repeat 00:04:19.418 LINK lsvmd 00:04:19.418 LINK led 00:04:19.418 LINK scheduler 00:04:19.679 LINK hello_sock 00:04:19.679 CC test/nvme/overhead/overhead.o 00:04:19.679 CC test/nvme/reserve/reserve.o 00:04:19.679 CC test/nvme/aer/aer.o 00:04:19.679 CC test/nvme/startup/startup.o 00:04:19.679 CC test/nvme/e2edp/nvme_dp.o 00:04:19.679 CC test/nvme/fdp/fdp.o 00:04:19.679 CC test/nvme/reset/reset.o 00:04:19.679 CC test/nvme/cuse/cuse.o 00:04:19.679 CC test/nvme/err_injection/err_injection.o 00:04:19.679 CC test/nvme/boot_partition/boot_partition.o 00:04:19.679 CC test/nvme/compliance/nvme_compliance.o 00:04:19.679 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:19.679 CC test/nvme/simple_copy/simple_copy.o 00:04:19.679 CC test/nvme/sgl/sgl.o 00:04:19.679 CC test/nvme/connect_stress/connect_stress.o 00:04:19.679 CC test/accel/dif/dif.o 00:04:19.679 CC test/nvme/fused_ordering/fused_ordering.o 00:04:19.679 CC test/blobfs/mkfs/mkfs.o 00:04:19.679 LINK thread 00:04:19.679 LINK idxd_perf 00:04:19.679 LINK memory_ut 00:04:19.679 CC test/lvol/esnap/esnap.o 00:04:19.679 LINK startup 00:04:19.679 LINK err_injection 00:04:19.679 LINK 
connect_stress 00:04:19.679 LINK doorbell_aers 00:04:19.679 LINK boot_partition 00:04:19.938 LINK simple_copy 00:04:19.938 LINK reserve 00:04:19.938 LINK fused_ordering 00:04:19.938 LINK nvme_dp 00:04:19.938 LINK overhead 00:04:19.938 LINK mkfs 00:04:19.938 LINK reset 00:04:19.938 LINK sgl 00:04:19.938 LINK aer 00:04:19.938 LINK nvme_compliance 00:04:19.938 LINK fdp 00:04:19.938 CC examples/nvme/reconnect/reconnect.o 00:04:19.938 CC examples/nvme/arbitration/arbitration.o 00:04:19.938 CC examples/nvme/hotplug/hotplug.o 00:04:19.938 CC examples/nvme/abort/abort.o 00:04:19.938 CC examples/nvme/hello_world/hello_world.o 00:04:19.938 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:19.938 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:19.938 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:20.198 LINK iscsi_fuzz 00:04:20.198 CC examples/blob/hello_world/hello_blob.o 00:04:20.198 CC examples/accel/perf/accel_perf.o 00:04:20.198 CC examples/blob/cli/blobcli.o 00:04:20.198 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:20.198 LINK dif 00:04:20.199 LINK pmr_persistence 00:04:20.199 LINK cmb_copy 00:04:20.199 LINK hello_world 00:04:20.199 LINK hotplug 00:04:20.460 LINK arbitration 00:04:20.460 LINK reconnect 00:04:20.460 LINK abort 00:04:20.460 LINK hello_blob 00:04:20.460 LINK nvme_manage 00:04:20.460 LINK hello_fsdev 00:04:20.460 LINK accel_perf 00:04:20.722 LINK blobcli 00:04:20.722 LINK cuse 00:04:20.722 CC test/bdev/bdevio/bdevio.o 00:04:21.296 CC examples/bdev/hello_world/hello_bdev.o 00:04:21.296 CC examples/bdev/bdevperf/bdevperf.o 00:04:21.296 LINK bdevio 00:04:21.296 LINK hello_bdev 00:04:21.869 LINK bdevperf 00:04:22.443 CC examples/nvmf/nvmf/nvmf.o 00:04:23.021 LINK nvmf 00:04:24.038 LINK esnap 00:04:24.325 00:04:24.325 real 0m54.298s 00:04:24.325 user 7m46.594s 00:04:24.325 sys 4m26.688s 00:04:24.325 07:14:08 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:24.325 07:14:08 make -- common/autotest_common.sh@10 -- $ set +x 00:04:24.325 
************************************ 00:04:24.325 END TEST make 00:04:24.325 ************************************ 00:04:24.325 07:14:08 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:24.325 07:14:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:24.325 07:14:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:24.325 07:14:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.325 07:14:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:24.326 07:14:08 -- pm/common@44 -- $ pid=1768439 00:04:24.326 07:14:08 -- pm/common@50 -- $ kill -TERM 1768439 00:04:24.326 07:14:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.326 07:14:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:24.326 07:14:08 -- pm/common@44 -- $ pid=1768440 00:04:24.326 07:14:08 -- pm/common@50 -- $ kill -TERM 1768440 00:04:24.326 07:14:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.326 07:14:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:24.326 07:14:08 -- pm/common@44 -- $ pid=1768442 00:04:24.326 07:14:08 -- pm/common@50 -- $ kill -TERM 1768442 00:04:24.326 07:14:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.326 07:14:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:24.326 07:14:08 -- pm/common@44 -- $ pid=1768466 00:04:24.326 07:14:08 -- pm/common@50 -- $ sudo -E kill -TERM 1768466 00:04:24.607 07:14:08 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:24.607 07:14:08 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 
00:04:24.607 07:14:08 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:24.607 07:14:08 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:24.607 07:14:08 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:24.607 07:14:08 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:24.607 07:14:08 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.607 07:14:08 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.607 07:14:08 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.607 07:14:08 -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.607 07:14:08 -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.607 07:14:08 -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.607 07:14:08 -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.607 07:14:08 -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.607 07:14:08 -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.607 07:14:08 -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.607 07:14:08 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.607 07:14:08 -- scripts/common.sh@344 -- # case "$op" in 00:04:24.607 07:14:08 -- scripts/common.sh@345 -- # : 1 00:04:24.607 07:14:08 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.607 07:14:08 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.607 07:14:08 -- scripts/common.sh@365 -- # decimal 1 00:04:24.607 07:14:08 -- scripts/common.sh@353 -- # local d=1 00:04:24.607 07:14:08 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.607 07:14:08 -- scripts/common.sh@355 -- # echo 1 00:04:24.607 07:14:08 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.607 07:14:08 -- scripts/common.sh@366 -- # decimal 2 00:04:24.607 07:14:08 -- scripts/common.sh@353 -- # local d=2 00:04:24.607 07:14:08 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.607 07:14:08 -- scripts/common.sh@355 -- # echo 2 00:04:24.607 07:14:08 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.607 07:14:08 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.607 07:14:08 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.607 07:14:08 -- scripts/common.sh@368 -- # return 0 00:04:24.607 07:14:08 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.607 07:14:08 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:24.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.607 --rc genhtml_branch_coverage=1 00:04:24.607 --rc genhtml_function_coverage=1 00:04:24.607 --rc genhtml_legend=1 00:04:24.607 --rc geninfo_all_blocks=1 00:04:24.607 --rc geninfo_unexecuted_blocks=1 00:04:24.607 00:04:24.607 ' 00:04:24.607 07:14:08 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:24.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.607 --rc genhtml_branch_coverage=1 00:04:24.607 --rc genhtml_function_coverage=1 00:04:24.607 --rc genhtml_legend=1 00:04:24.608 --rc geninfo_all_blocks=1 00:04:24.608 --rc geninfo_unexecuted_blocks=1 00:04:24.608 00:04:24.608 ' 00:04:24.608 07:14:08 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:24.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.608 --rc genhtml_branch_coverage=1 00:04:24.608 --rc 
genhtml_function_coverage=1 00:04:24.608 --rc genhtml_legend=1 00:04:24.608 --rc geninfo_all_blocks=1 00:04:24.608 --rc geninfo_unexecuted_blocks=1 00:04:24.608 00:04:24.608 ' 00:04:24.608 07:14:08 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:24.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.608 --rc genhtml_branch_coverage=1 00:04:24.608 --rc genhtml_function_coverage=1 00:04:24.608 --rc genhtml_legend=1 00:04:24.608 --rc geninfo_all_blocks=1 00:04:24.608 --rc geninfo_unexecuted_blocks=1 00:04:24.608 00:04:24.608 ' 00:04:24.608 07:14:08 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:24.608 07:14:08 -- nvmf/common.sh@7 -- # uname -s 00:04:24.608 07:14:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.608 07:14:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.608 07:14:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.608 07:14:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.608 07:14:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.608 07:14:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.608 07:14:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:24.608 07:14:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.608 07:14:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.608 07:14:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.608 07:14:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:24.608 07:14:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:24.608 07:14:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.608 07:14:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.608 07:14:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:24.608 07:14:08 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.608 07:14:08 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:24.608 07:14:08 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:24.608 07:14:08 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.608 07:14:08 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.608 07:14:08 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.608 07:14:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.608 07:14:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.608 07:14:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.608 07:14:08 -- paths/export.sh@5 -- # export PATH 00:04:24.608 07:14:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.608 07:14:08 -- nvmf/common.sh@51 -- # : 0 00:04:24.608 07:14:08 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:24.608 07:14:08 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:04:24.608 07:14:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.608 07:14:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.608 07:14:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.608 07:14:08 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:24.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:24.608 07:14:08 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:24.608 07:14:08 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:24.608 07:14:08 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:24.608 07:14:08 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:24.608 07:14:08 -- spdk/autotest.sh@32 -- # uname -s 00:04:24.608 07:14:08 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:24.608 07:14:08 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:24.608 07:14:08 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:24.608 07:14:08 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:24.608 07:14:08 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:24.608 07:14:08 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:24.608 07:14:08 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:24.608 07:14:08 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:24.608 07:14:08 -- spdk/autotest.sh@48 -- # udevadm_pid=1833648 00:04:24.608 07:14:08 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:24.608 07:14:08 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:24.608 07:14:08 -- pm/common@17 -- # local monitor 00:04:24.608 07:14:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.608 07:14:08 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:24.608 07:14:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.608 07:14:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.608 07:14:08 -- pm/common@21 -- # date +%s 00:04:24.608 07:14:08 -- pm/common@21 -- # date +%s 00:04:24.608 07:14:08 -- pm/common@25 -- # sleep 1 00:04:24.608 07:14:08 -- pm/common@21 -- # date +%s 00:04:24.608 07:14:08 -- pm/common@21 -- # date +%s 00:04:24.608 07:14:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732601648 00:04:24.608 07:14:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732601648 00:04:24.608 07:14:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732601648 00:04:24.608 07:14:08 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732601648 00:04:24.869 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732601648_collect-cpu-load.pm.log 00:04:24.869 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732601648_collect-vmstat.pm.log 00:04:24.869 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732601648_collect-cpu-temp.pm.log 00:04:24.869 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732601648_collect-bmc-pm.bmc.pm.log 00:04:25.809 
07:14:09 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:25.809 07:14:09 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:25.809 07:14:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.809 07:14:09 -- common/autotest_common.sh@10 -- # set +x 00:04:25.809 07:14:09 -- spdk/autotest.sh@59 -- # create_test_list 00:04:25.809 07:14:09 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:25.809 07:14:09 -- common/autotest_common.sh@10 -- # set +x 00:04:25.809 07:14:09 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:25.809 07:14:09 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:25.809 07:14:09 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:25.809 07:14:09 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:25.809 07:14:09 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:25.809 07:14:09 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:25.809 07:14:09 -- common/autotest_common.sh@1457 -- # uname 00:04:25.809 07:14:09 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:25.809 07:14:09 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:25.809 07:14:09 -- common/autotest_common.sh@1477 -- # uname 00:04:25.809 07:14:09 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:25.809 07:14:09 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:25.809 07:14:09 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:25.809 lcov: LCOV version 1.15 00:04:25.809 07:14:09 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:40.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:40.711 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:58.826 07:14:39 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:58.826 07:14:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.826 07:14:39 -- common/autotest_common.sh@10 -- # set +x 00:04:58.826 07:14:39 -- spdk/autotest.sh@78 -- # rm -f 00:04:58.826 07:14:39 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:59.770 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:59.770 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:59.770 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:59.770 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:59.770 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:59.770 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:59.770 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:59.770 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:59.770 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:59.770 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:59.770 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:59.770 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:59.770 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:59.770 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:05:00.031 
0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:05:00.031 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:05:00.031 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:05:00.292 07:14:44 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:00.292 07:14:44 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:00.292 07:14:44 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:00.292 07:14:44 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:00.292 07:14:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:00.292 07:14:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:00.292 07:14:44 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:00.292 07:14:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:00.292 07:14:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:00.292 07:14:44 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:00.292 07:14:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.292 07:14:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:00.292 07:14:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:00.292 07:14:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:00.292 07:14:44 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:00.292 No valid GPT data, bailing 00:05:00.292 07:14:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:00.292 07:14:44 -- scripts/common.sh@394 -- # pt= 00:05:00.292 07:14:44 -- scripts/common.sh@395 -- # return 1 00:05:00.292 07:14:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:00.292 1+0 records in 00:05:00.292 1+0 records out 00:05:00.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00440189 s, 238 MB/s 00:05:00.292 07:14:44 -- spdk/autotest.sh@105 -- # sync 00:05:00.292 07:14:44 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:00.292 07:14:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:00.292 07:14:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:10.300 07:14:52 -- spdk/autotest.sh@111 -- # uname -s 00:05:10.300 07:14:52 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:10.300 07:14:52 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:10.300 07:14:52 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:12.216 Hugepages 00:05:12.216 node hugesize free / total 00:05:12.216 node0 1048576kB 0 / 0 00:05:12.216 node0 2048kB 0 / 0 00:05:12.216 node1 1048576kB 0 / 0 00:05:12.216 node1 2048kB 0 / 0 00:05:12.216 00:05:12.216 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:12.476 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:12.476 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:12.476 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:12.476 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:12.477 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:12.477 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:12.477 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:12.477 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:12.477 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:12.477 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:12.477 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:12.477 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:12.477 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:12.477 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:12.477 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:12.477 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:12.477 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:12.477 07:14:56 -- spdk/autotest.sh@117 -- # uname -s 00:05:12.477 07:14:56 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:12.477 07:14:56 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:05:12.477 07:14:56 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:16.685 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:16.685 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:16.685 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:16.685 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:16.685 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:16.685 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:16.685 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:16.685 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:16.685 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:16.685 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:16.685 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:16.685 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:16.685 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:16.685 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:16.685 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:16.685 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:18.601 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:18.601 07:15:02 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:19.987 07:15:03 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:19.987 07:15:03 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:19.987 07:15:03 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:19.987 07:15:03 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:19.987 07:15:03 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:19.987 07:15:03 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:19.987 07:15:03 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:19.987 07:15:03 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:19.987 07:15:03 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:05:19.987 07:15:03 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:19.987 07:15:03 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:19.987 07:15:03 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:24.191 Waiting for block devices as requested 00:05:24.191 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:24.191 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:24.191 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:24.191 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:24.191 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:24.191 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:24.191 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:24.191 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:24.191 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:24.452 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:24.452 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:24.452 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:24.712 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:24.712 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:24.712 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:24.712 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:24.973 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:25.234 07:15:09 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:25.234 07:15:09 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:25.234 07:15:09 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:25.234 07:15:09 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:05:25.234 07:15:09 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:25.234 07:15:09 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:25.234 07:15:09 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:25.234 07:15:09 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:25.234 07:15:09 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:25.234 07:15:09 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:25.234 07:15:09 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:25.234 07:15:09 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:25.234 07:15:09 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:25.234 07:15:09 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:05:25.234 07:15:09 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:25.234 07:15:09 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:25.234 07:15:09 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:25.234 07:15:09 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:25.234 07:15:09 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:25.234 07:15:09 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:25.234 07:15:09 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:25.234 07:15:09 -- common/autotest_common.sh@1543 -- # continue 00:05:25.234 07:15:09 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:25.234 07:15:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:25.234 07:15:09 -- common/autotest_common.sh@10 -- # set +x 00:05:25.234 07:15:09 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:25.234 07:15:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.234 07:15:09 -- common/autotest_common.sh@10 -- # set +x 00:05:25.234 07:15:09 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:29.444 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:29.444 0000:80:01.7 (8086 0b00): 
ioatdma -> vfio-pci 00:05:29.444 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:29.444 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:29.444 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:29.444 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:29.444 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:29.444 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:29.444 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:29.444 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:29.444 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:29.444 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:29.444 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:29.444 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:29.444 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:29.444 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:29.444 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:30.016 07:15:13 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:30.016 07:15:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:30.016 07:15:13 -- common/autotest_common.sh@10 -- # set +x 00:05:30.016 07:15:13 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:30.016 07:15:13 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:30.016 07:15:13 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:30.016 07:15:13 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:30.016 07:15:13 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:30.016 07:15:13 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:30.016 07:15:13 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:30.016 07:15:13 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:30.016 07:15:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:30.016 07:15:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:30.016 07:15:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:05:30.016 07:15:13 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:30.016 07:15:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:30.016 07:15:13 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:30.016 07:15:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:30.016 07:15:13 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:30.016 07:15:13 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:30.016 07:15:13 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:05:30.016 07:15:13 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:30.016 07:15:13 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:30.016 07:15:13 -- common/autotest_common.sh@1572 -- # return 0 00:05:30.016 07:15:13 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:30.016 07:15:13 -- common/autotest_common.sh@1580 -- # return 0 00:05:30.016 07:15:13 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:30.016 07:15:13 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:30.016 07:15:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:30.016 07:15:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:30.016 07:15:13 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:30.016 07:15:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.016 07:15:13 -- common/autotest_common.sh@10 -- # set +x 00:05:30.016 07:15:14 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:30.016 07:15:14 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:30.016 07:15:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.016 07:15:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.016 07:15:14 -- common/autotest_common.sh@10 -- # set +x 00:05:30.016 ************************************ 
00:05:30.016 START TEST env 00:05:30.016 ************************************ 00:05:30.016 07:15:14 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:30.016 * Looking for test storage... 00:05:30.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:30.277 07:15:14 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.277 07:15:14 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.277 07:15:14 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.277 07:15:14 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.277 07:15:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.277 07:15:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.277 07:15:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.277 07:15:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.277 07:15:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.277 07:15:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.277 07:15:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.277 07:15:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.277 07:15:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.277 07:15:14 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.277 07:15:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.277 07:15:14 env -- scripts/common.sh@344 -- # case "$op" in 00:05:30.277 07:15:14 env -- scripts/common.sh@345 -- # : 1 00:05:30.277 07:15:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.277 07:15:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.277 07:15:14 env -- scripts/common.sh@365 -- # decimal 1 00:05:30.277 07:15:14 env -- scripts/common.sh@353 -- # local d=1 00:05:30.277 07:15:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.277 07:15:14 env -- scripts/common.sh@355 -- # echo 1 00:05:30.277 07:15:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.277 07:15:14 env -- scripts/common.sh@366 -- # decimal 2 00:05:30.277 07:15:14 env -- scripts/common.sh@353 -- # local d=2 00:05:30.277 07:15:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.277 07:15:14 env -- scripts/common.sh@355 -- # echo 2 00:05:30.277 07:15:14 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.277 07:15:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.277 07:15:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.277 07:15:14 env -- scripts/common.sh@368 -- # return 0 00:05:30.277 07:15:14 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.277 07:15:14 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.277 --rc genhtml_branch_coverage=1 00:05:30.277 --rc genhtml_function_coverage=1 00:05:30.277 --rc genhtml_legend=1 00:05:30.277 --rc geninfo_all_blocks=1 00:05:30.277 --rc geninfo_unexecuted_blocks=1 00:05:30.277 00:05:30.277 ' 00:05:30.277 07:15:14 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.277 --rc genhtml_branch_coverage=1 00:05:30.277 --rc genhtml_function_coverage=1 00:05:30.277 --rc genhtml_legend=1 00:05:30.277 --rc geninfo_all_blocks=1 00:05:30.277 --rc geninfo_unexecuted_blocks=1 00:05:30.277 00:05:30.277 ' 00:05:30.277 07:15:14 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:30.278 --rc genhtml_branch_coverage=1 00:05:30.278 --rc genhtml_function_coverage=1 00:05:30.278 --rc genhtml_legend=1 00:05:30.278 --rc geninfo_all_blocks=1 00:05:30.278 --rc geninfo_unexecuted_blocks=1 00:05:30.278 00:05:30.278 ' 00:05:30.278 07:15:14 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.278 --rc genhtml_branch_coverage=1 00:05:30.278 --rc genhtml_function_coverage=1 00:05:30.278 --rc genhtml_legend=1 00:05:30.278 --rc geninfo_all_blocks=1 00:05:30.278 --rc geninfo_unexecuted_blocks=1 00:05:30.278 00:05:30.278 ' 00:05:30.278 07:15:14 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:30.278 07:15:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.278 07:15:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.278 07:15:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.278 ************************************ 00:05:30.278 START TEST env_memory 00:05:30.278 ************************************ 00:05:30.278 07:15:14 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:30.278 00:05:30.278 00:05:30.278 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.278 http://cunit.sourceforge.net/ 00:05:30.278 00:05:30.278 00:05:30.278 Suite: memory 00:05:30.278 Test: alloc and free memory map ...[2024-11-26 07:15:14.322951] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:30.278 passed 00:05:30.278 Test: mem map translation ...[2024-11-26 07:15:14.348299] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:30.278 [2024-11-26 
07:15:14.348328] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:30.278 [2024-11-26 07:15:14.348374] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:30.278 [2024-11-26 07:15:14.348381] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:30.278 passed 00:05:30.278 Test: mem map registration ...[2024-11-26 07:15:14.403406] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:30.278 [2024-11-26 07:15:14.403420] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:30.539 passed 00:05:30.539 Test: mem map adjacent registrations ...passed 00:05:30.539 00:05:30.539 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.539 suites 1 1 n/a 0 0 00:05:30.539 tests 4 4 4 0 0 00:05:30.539 asserts 152 152 152 0 n/a 00:05:30.539 00:05:30.539 Elapsed time = 0.191 seconds 00:05:30.539 00:05:30.539 real 0m0.205s 00:05:30.539 user 0m0.193s 00:05:30.539 sys 0m0.012s 00:05:30.539 07:15:14 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.539 07:15:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:30.539 ************************************ 00:05:30.539 END TEST env_memory 00:05:30.539 ************************************ 00:05:30.539 07:15:14 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:30.539 07:15:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:05:30.539 07:15:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.539 07:15:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.540 ************************************ 00:05:30.540 START TEST env_vtophys 00:05:30.540 ************************************ 00:05:30.540 07:15:14 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:30.540 EAL: lib.eal log level changed from notice to debug 00:05:30.540 EAL: Detected lcore 0 as core 0 on socket 0 00:05:30.540 EAL: Detected lcore 1 as core 1 on socket 0 00:05:30.540 EAL: Detected lcore 2 as core 2 on socket 0 00:05:30.540 EAL: Detected lcore 3 as core 3 on socket 0 00:05:30.540 EAL: Detected lcore 4 as core 4 on socket 0 00:05:30.540 EAL: Detected lcore 5 as core 5 on socket 0 00:05:30.540 EAL: Detected lcore 6 as core 6 on socket 0 00:05:30.540 EAL: Detected lcore 7 as core 7 on socket 0 00:05:30.540 EAL: Detected lcore 8 as core 8 on socket 0 00:05:30.540 EAL: Detected lcore 9 as core 9 on socket 0 00:05:30.540 EAL: Detected lcore 10 as core 10 on socket 0 00:05:30.540 EAL: Detected lcore 11 as core 11 on socket 0 00:05:30.540 EAL: Detected lcore 12 as core 12 on socket 0 00:05:30.540 EAL: Detected lcore 13 as core 13 on socket 0 00:05:30.540 EAL: Detected lcore 14 as core 14 on socket 0 00:05:30.540 EAL: Detected lcore 15 as core 15 on socket 0 00:05:30.540 EAL: Detected lcore 16 as core 16 on socket 0 00:05:30.540 EAL: Detected lcore 17 as core 17 on socket 0 00:05:30.540 EAL: Detected lcore 18 as core 18 on socket 0 00:05:30.540 EAL: Detected lcore 19 as core 19 on socket 0 00:05:30.540 EAL: Detected lcore 20 as core 20 on socket 0 00:05:30.540 EAL: Detected lcore 21 as core 21 on socket 0 00:05:30.540 EAL: Detected lcore 22 as core 22 on socket 0 00:05:30.540 EAL: Detected lcore 23 as core 23 on socket 0 00:05:30.540 EAL: Detected lcore 24 as core 24 on socket 0 00:05:30.540 EAL: Detected lcore 25 
as core 25 on socket 0 00:05:30.540 EAL: Detected lcore 26 as core 26 on socket 0 00:05:30.540 EAL: Detected lcore 27 as core 27 on socket 0 00:05:30.540 EAL: Detected lcore 28 as core 28 on socket 0 00:05:30.540 EAL: Detected lcore 29 as core 29 on socket 0 00:05:30.540 EAL: Detected lcore 30 as core 30 on socket 0 00:05:30.540 EAL: Detected lcore 31 as core 31 on socket 0 00:05:30.540 EAL: Detected lcore 32 as core 32 on socket 0 00:05:30.540 EAL: Detected lcore 33 as core 33 on socket 0 00:05:30.540 EAL: Detected lcore 34 as core 34 on socket 0 00:05:30.540 EAL: Detected lcore 35 as core 35 on socket 0 00:05:30.540 EAL: Detected lcore 36 as core 0 on socket 1 00:05:30.540 EAL: Detected lcore 37 as core 1 on socket 1 00:05:30.540 EAL: Detected lcore 38 as core 2 on socket 1 00:05:30.540 EAL: Detected lcore 39 as core 3 on socket 1 00:05:30.540 EAL: Detected lcore 40 as core 4 on socket 1 00:05:30.540 EAL: Detected lcore 41 as core 5 on socket 1 00:05:30.540 EAL: Detected lcore 42 as core 6 on socket 1 00:05:30.540 EAL: Detected lcore 43 as core 7 on socket 1 00:05:30.540 EAL: Detected lcore 44 as core 8 on socket 1 00:05:30.540 EAL: Detected lcore 45 as core 9 on socket 1 00:05:30.540 EAL: Detected lcore 46 as core 10 on socket 1 00:05:30.540 EAL: Detected lcore 47 as core 11 on socket 1 00:05:30.540 EAL: Detected lcore 48 as core 12 on socket 1 00:05:30.540 EAL: Detected lcore 49 as core 13 on socket 1 00:05:30.540 EAL: Detected lcore 50 as core 14 on socket 1 00:05:30.540 EAL: Detected lcore 51 as core 15 on socket 1 00:05:30.540 EAL: Detected lcore 52 as core 16 on socket 1 00:05:30.540 EAL: Detected lcore 53 as core 17 on socket 1 00:05:30.540 EAL: Detected lcore 54 as core 18 on socket 1 00:05:30.540 EAL: Detected lcore 55 as core 19 on socket 1 00:05:30.540 EAL: Detected lcore 56 as core 20 on socket 1 00:05:30.540 EAL: Detected lcore 57 as core 21 on socket 1 00:05:30.540 EAL: Detected lcore 58 as core 22 on socket 1 00:05:30.540 EAL: Detected lcore 59 as 
core 23 on socket 1 00:05:30.540 EAL: Detected lcore 60 as core 24 on socket 1 00:05:30.540 EAL: Detected lcore 61 as core 25 on socket 1 00:05:30.540 EAL: Detected lcore 62 as core 26 on socket 1 00:05:30.540 EAL: Detected lcore 63 as core 27 on socket 1 00:05:30.540 EAL: Detected lcore 64 as core 28 on socket 1 00:05:30.540 EAL: Detected lcore 65 as core 29 on socket 1 00:05:30.540 EAL: Detected lcore 66 as core 30 on socket 1 00:05:30.540 EAL: Detected lcore 67 as core 31 on socket 1 00:05:30.540 EAL: Detected lcore 68 as core 32 on socket 1 00:05:30.540 EAL: Detected lcore 69 as core 33 on socket 1 00:05:30.540 EAL: Detected lcore 70 as core 34 on socket 1 00:05:30.540 EAL: Detected lcore 71 as core 35 on socket 1 00:05:30.540 EAL: Detected lcore 72 as core 0 on socket 0 00:05:30.540 EAL: Detected lcore 73 as core 1 on socket 0 00:05:30.540 EAL: Detected lcore 74 as core 2 on socket 0 00:05:30.540 EAL: Detected lcore 75 as core 3 on socket 0 00:05:30.540 EAL: Detected lcore 76 as core 4 on socket 0 00:05:30.540 EAL: Detected lcore 77 as core 5 on socket 0 00:05:30.540 EAL: Detected lcore 78 as core 6 on socket 0 00:05:30.540 EAL: Detected lcore 79 as core 7 on socket 0 00:05:30.540 EAL: Detected lcore 80 as core 8 on socket 0 00:05:30.540 EAL: Detected lcore 81 as core 9 on socket 0 00:05:30.540 EAL: Detected lcore 82 as core 10 on socket 0 00:05:30.540 EAL: Detected lcore 83 as core 11 on socket 0 00:05:30.540 EAL: Detected lcore 84 as core 12 on socket 0 00:05:30.540 EAL: Detected lcore 85 as core 13 on socket 0 00:05:30.540 EAL: Detected lcore 86 as core 14 on socket 0 00:05:30.540 EAL: Detected lcore 87 as core 15 on socket 0 00:05:30.540 EAL: Detected lcore 88 as core 16 on socket 0 00:05:30.540 EAL: Detected lcore 89 as core 17 on socket 0 00:05:30.540 EAL: Detected lcore 90 as core 18 on socket 0 00:05:30.540 EAL: Detected lcore 91 as core 19 on socket 0 00:05:30.540 EAL: Detected lcore 92 as core 20 on socket 0 00:05:30.540 EAL: Detected lcore 93 as 
core 21 on socket 0 00:05:30.540 EAL: Detected lcore 94 as core 22 on socket 0 00:05:30.540 EAL: Detected lcore 95 as core 23 on socket 0 00:05:30.540 EAL: Detected lcore 96 as core 24 on socket 0 00:05:30.540 EAL: Detected lcore 97 as core 25 on socket 0 00:05:30.540 EAL: Detected lcore 98 as core 26 on socket 0 00:05:30.540 EAL: Detected lcore 99 as core 27 on socket 0 00:05:30.540 EAL: Detected lcore 100 as core 28 on socket 0 00:05:30.540 EAL: Detected lcore 101 as core 29 on socket 0 00:05:30.540 EAL: Detected lcore 102 as core 30 on socket 0 00:05:30.540 EAL: Detected lcore 103 as core 31 on socket 0 00:05:30.540 EAL: Detected lcore 104 as core 32 on socket 0 00:05:30.540 EAL: Detected lcore 105 as core 33 on socket 0 00:05:30.540 EAL: Detected lcore 106 as core 34 on socket 0 00:05:30.540 EAL: Detected lcore 107 as core 35 on socket 0 00:05:30.540 EAL: Detected lcore 108 as core 0 on socket 1 00:05:30.540 EAL: Detected lcore 109 as core 1 on socket 1 00:05:30.540 EAL: Detected lcore 110 as core 2 on socket 1 00:05:30.540 EAL: Detected lcore 111 as core 3 on socket 1 00:05:30.540 EAL: Detected lcore 112 as core 4 on socket 1 00:05:30.540 EAL: Detected lcore 113 as core 5 on socket 1 00:05:30.540 EAL: Detected lcore 114 as core 6 on socket 1 00:05:30.540 EAL: Detected lcore 115 as core 7 on socket 1 00:05:30.540 EAL: Detected lcore 116 as core 8 on socket 1 00:05:30.540 EAL: Detected lcore 117 as core 9 on socket 1 00:05:30.540 EAL: Detected lcore 118 as core 10 on socket 1 00:05:30.540 EAL: Detected lcore 119 as core 11 on socket 1 00:05:30.540 EAL: Detected lcore 120 as core 12 on socket 1 00:05:30.540 EAL: Detected lcore 121 as core 13 on socket 1 00:05:30.540 EAL: Detected lcore 122 as core 14 on socket 1 00:05:30.540 EAL: Detected lcore 123 as core 15 on socket 1 00:05:30.540 EAL: Detected lcore 124 as core 16 on socket 1 00:05:30.540 EAL: Detected lcore 125 as core 17 on socket 1 00:05:30.540 EAL: Detected lcore 126 as core 18 on socket 1 00:05:30.540 
EAL: Detected lcore 127 as core 19 on socket 1 00:05:30.540 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:30.540 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:30.540 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:30.540 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:30.540 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:30.540 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:30.540 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:30.540 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:30.540 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:30.540 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:30.540 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:30.540 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:30.540 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:30.540 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:30.541 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:30.541 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:30.541 EAL: Maximum logical cores by configuration: 128 00:05:30.541 EAL: Detected CPU lcores: 128 00:05:30.541 EAL: Detected NUMA nodes: 2 00:05:30.541 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:30.541 EAL: Detected shared linkage of DPDK 00:05:30.541 EAL: No shared files mode enabled, IPC will be disabled 00:05:30.541 EAL: Bus pci wants IOVA as 'DC' 00:05:30.541 EAL: Buses did not request a specific IOVA mode. 00:05:30.541 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:30.541 EAL: Selected IOVA mode 'VA' 00:05:30.541 EAL: Probing VFIO support... 00:05:30.541 EAL: IOMMU type 1 (Type 1) is supported 00:05:30.541 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:30.541 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:30.541 EAL: VFIO support initialized 00:05:30.541 EAL: Ask a virtual area of 0x2e000 bytes 00:05:30.541 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:30.541 EAL: Setting up physically contiguous memory... 
00:05:30.541 EAL: Setting maximum number of open files to 524288 00:05:30.541 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:30.541 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:30.541 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:30.541 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.541 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:30.541 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.541 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.541 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:30.541 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:30.541 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.541 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:30.541 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.541 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.541 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:30.541 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:30.541 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.541 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:30.541 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.541 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.541 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:30.541 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:30.541 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.541 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:30.541 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.541 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.541 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:30.541 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:30.541 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:30.541 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.541 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:30.541 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.541 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.541 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:30.541 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:30.541 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.541 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:30.541 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.541 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.541 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:30.541 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:30.541 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.541 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:30.541 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.541 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.541 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:30.541 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:30.541 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.541 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:30.541 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.541 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.541 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:30.541 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:30.541 EAL: Hugepages will be freed exactly as allocated. 
00:05:30.541 EAL: No shared files mode enabled, IPC is disabled
00:05:30.541 EAL: No shared files mode enabled, IPC is disabled
00:05:30.541 EAL: TSC frequency is ~2400000 KHz
00:05:30.541 EAL: Main lcore 0 is ready (tid=7fb39369aa00;cpuset=[0])
00:05:30.541 EAL: Trying to obtain current memory policy.
00:05:30.541 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:30.541 EAL: Restoring previous memory policy: 0
00:05:30.541 EAL: request: mp_malloc_sync
00:05:30.541 EAL: No shared files mode enabled, IPC is disabled
00:05:30.541 EAL: Heap on socket 0 was expanded by 2MB
00:05:30.541 EAL: No shared files mode enabled, IPC is disabled
00:05:30.541 EAL: No PCI address specified using 'addr=' in: bus=pci
00:05:30.541 EAL: Mem event callback 'spdk:(nil)' registered
00:05:30.541
00:05:30.541
00:05:30.541 CUnit - A unit testing framework for C - Version 2.1-3
00:05:30.541 http://cunit.sourceforge.net/
00:05:30.541
00:05:30.541
00:05:30.541 Suite: components_suite
00:05:30.541 Test: vtophys_malloc_test ...passed
00:05:30.541 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:05:30.541 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:30.541 EAL: Restoring previous memory policy: 4
00:05:30.541 EAL: Calling mem event callback 'spdk:(nil)'
00:05:30.541 EAL: request: mp_malloc_sync
00:05:30.541 EAL: No shared files mode enabled, IPC is disabled
00:05:30.541 EAL: Heap on socket 0 was expanded by 4MB
00:05:30.541 EAL: Calling mem event callback 'spdk:(nil)'
00:05:30.541 EAL: request: mp_malloc_sync
00:05:30.541 EAL: No shared files mode enabled, IPC is disabled
00:05:30.541 EAL: Heap on socket 0 was shrunk by 4MB
00:05:30.541 EAL: Trying to obtain current memory policy.
00:05:30.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.541 EAL: Restoring previous memory policy: 4 00:05:30.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.541 EAL: request: mp_malloc_sync 00:05:30.541 EAL: No shared files mode enabled, IPC is disabled 00:05:30.541 EAL: Heap on socket 0 was expanded by 6MB 00:05:30.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.541 EAL: request: mp_malloc_sync 00:05:30.541 EAL: No shared files mode enabled, IPC is disabled 00:05:30.541 EAL: Heap on socket 0 was shrunk by 6MB 00:05:30.541 EAL: Trying to obtain current memory policy. 00:05:30.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.541 EAL: Restoring previous memory policy: 4 00:05:30.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.541 EAL: request: mp_malloc_sync 00:05:30.541 EAL: No shared files mode enabled, IPC is disabled 00:05:30.541 EAL: Heap on socket 0 was expanded by 10MB 00:05:30.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.541 EAL: request: mp_malloc_sync 00:05:30.541 EAL: No shared files mode enabled, IPC is disabled 00:05:30.541 EAL: Heap on socket 0 was shrunk by 10MB 00:05:30.541 EAL: Trying to obtain current memory policy. 00:05:30.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.541 EAL: Restoring previous memory policy: 4 00:05:30.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.541 EAL: request: mp_malloc_sync 00:05:30.541 EAL: No shared files mode enabled, IPC is disabled 00:05:30.541 EAL: Heap on socket 0 was expanded by 18MB 00:05:30.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.541 EAL: request: mp_malloc_sync 00:05:30.541 EAL: No shared files mode enabled, IPC is disabled 00:05:30.541 EAL: Heap on socket 0 was shrunk by 18MB 00:05:30.542 EAL: Trying to obtain current memory policy. 
00:05:30.542 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.803 EAL: Restoring previous memory policy: 4 00:05:30.803 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.803 EAL: request: mp_malloc_sync 00:05:30.803 EAL: No shared files mode enabled, IPC is disabled 00:05:30.803 EAL: Heap on socket 0 was expanded by 34MB 00:05:30.803 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.803 EAL: request: mp_malloc_sync 00:05:30.803 EAL: No shared files mode enabled, IPC is disabled 00:05:30.803 EAL: Heap on socket 0 was shrunk by 34MB 00:05:30.803 EAL: Trying to obtain current memory policy. 00:05:30.803 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.803 EAL: Restoring previous memory policy: 4 00:05:30.803 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.803 EAL: request: mp_malloc_sync 00:05:30.803 EAL: No shared files mode enabled, IPC is disabled 00:05:30.803 EAL: Heap on socket 0 was expanded by 66MB 00:05:30.803 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.803 EAL: request: mp_malloc_sync 00:05:30.803 EAL: No shared files mode enabled, IPC is disabled 00:05:30.803 EAL: Heap on socket 0 was shrunk by 66MB 00:05:30.803 EAL: Trying to obtain current memory policy. 00:05:30.803 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.803 EAL: Restoring previous memory policy: 4 00:05:30.803 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.803 EAL: request: mp_malloc_sync 00:05:30.803 EAL: No shared files mode enabled, IPC is disabled 00:05:30.803 EAL: Heap on socket 0 was expanded by 130MB 00:05:30.803 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.803 EAL: request: mp_malloc_sync 00:05:30.803 EAL: No shared files mode enabled, IPC is disabled 00:05:30.803 EAL: Heap on socket 0 was shrunk by 130MB 00:05:30.803 EAL: Trying to obtain current memory policy. 
00:05:30.803 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.803 EAL: Restoring previous memory policy: 4 00:05:30.803 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.803 EAL: request: mp_malloc_sync 00:05:30.803 EAL: No shared files mode enabled, IPC is disabled 00:05:30.803 EAL: Heap on socket 0 was expanded by 258MB 00:05:30.803 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.803 EAL: request: mp_malloc_sync 00:05:30.803 EAL: No shared files mode enabled, IPC is disabled 00:05:30.803 EAL: Heap on socket 0 was shrunk by 258MB 00:05:30.803 EAL: Trying to obtain current memory policy. 00:05:30.803 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.803 EAL: Restoring previous memory policy: 4 00:05:30.803 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.803 EAL: request: mp_malloc_sync 00:05:30.803 EAL: No shared files mode enabled, IPC is disabled 00:05:30.803 EAL: Heap on socket 0 was expanded by 514MB 00:05:31.065 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.065 EAL: request: mp_malloc_sync 00:05:31.065 EAL: No shared files mode enabled, IPC is disabled 00:05:31.065 EAL: Heap on socket 0 was shrunk by 514MB 00:05:31.065 EAL: Trying to obtain current memory policy. 
00:05:31.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.065 EAL: Restoring previous memory policy: 4 00:05:31.065 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.065 EAL: request: mp_malloc_sync 00:05:31.065 EAL: No shared files mode enabled, IPC is disabled 00:05:31.065 EAL: Heap on socket 0 was expanded by 1026MB 00:05:31.326 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.326 EAL: request: mp_malloc_sync 00:05:31.326 EAL: No shared files mode enabled, IPC is disabled 00:05:31.326 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:31.326 passed 00:05:31.326 00:05:31.326 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.326 suites 1 1 n/a 0 0 00:05:31.327 tests 2 2 2 0 0 00:05:31.327 asserts 497 497 497 0 n/a 00:05:31.327 00:05:31.327 Elapsed time = 0.646 seconds 00:05:31.327 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.327 EAL: request: mp_malloc_sync 00:05:31.327 EAL: No shared files mode enabled, IPC is disabled 00:05:31.327 EAL: Heap on socket 0 was shrunk by 2MB 00:05:31.327 EAL: No shared files mode enabled, IPC is disabled 00:05:31.327 EAL: No shared files mode enabled, IPC is disabled 00:05:31.327 EAL: No shared files mode enabled, IPC is disabled 00:05:31.327 00:05:31.327 real 0m0.781s 00:05:31.327 user 0m0.419s 00:05:31.327 sys 0m0.335s 00:05:31.327 07:15:15 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.327 07:15:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:31.327 ************************************ 00:05:31.327 END TEST env_vtophys 00:05:31.327 ************************************ 00:05:31.327 07:15:15 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:31.327 07:15:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.327 07:15:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.327 07:15:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.327 
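The env_vtophys test that finishes above exercises virtual-to-physical address translation. On Linux the same idea can be sketched in userspace via /proc/self/pagemap; this is a simplified illustration, not SPDK's implementation, and reading real PFN values needs CAP_SYS_ADMIN on modern kernels, so the sketch works on a hypothetical pagemap entry:

```python
import os

PAGEMAP_ENTRY_SIZE = 8  # each /proc/<pid>/pagemap entry is 64 bits

def pagemap_offset(vaddr: int, page_size: int) -> int:
    """Byte offset into /proc/self/pagemap holding the entry for vaddr."""
    return (vaddr // page_size) * PAGEMAP_ENTRY_SIZE

def vtophys(vaddr: int, page_size: int, entry: int) -> int:
    """Combine a pagemap entry's PFN with the in-page offset of vaddr.

    Bits 0-54 of the entry hold the PFN; bit 63 is the 'present' flag.
    Returns -1 when the page is not present (or the PFN is hidden because
    the reader lacks privilege).
    """
    if not (entry >> 63) & 1:
        return -1
    pfn = entry & ((1 << 55) - 1)
    return pfn * page_size + (vaddr % page_size)

page_size = os.sysconf("SC_PAGE_SIZE")  # typically 4096
# Hypothetical entry: present bit set, PFN 0x1234
entry = (1 << 63) | 0x1234
print(hex(vtophys(0x7fff0000_0123, page_size, entry)))
```

SPDK's real vtophys path avoids per-page lookups like this by pinning hugepage-backed memory and caching translations, which is why the test above drives translation through heap expand/shrink events rather than pagemap reads.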
************************************ 00:05:31.327 START TEST env_pci 00:05:31.327 ************************************ 00:05:31.327 07:15:15 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:31.327 00:05:31.327 00:05:31.327 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.327 http://cunit.sourceforge.net/ 00:05:31.327 00:05:31.327 00:05:31.327 Suite: pci 00:05:31.327 Test: pci_hook ...[2024-11-26 07:15:15.435312] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1854726 has claimed it 00:05:31.588 EAL: Cannot find device (10000:00:01.0) 00:05:31.588 EAL: Failed to attach device on primary process 00:05:31.588 passed 00:05:31.588 00:05:31.588 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.588 suites 1 1 n/a 0 0 00:05:31.588 tests 1 1 1 0 0 00:05:31.588 asserts 25 25 25 0 n/a 00:05:31.588 00:05:31.588 Elapsed time = 0.034 seconds 00:05:31.588 00:05:31.588 real 0m0.055s 00:05:31.588 user 0m0.014s 00:05:31.588 sys 0m0.041s 00:05:31.588 07:15:15 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.588 07:15:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:31.588 ************************************ 00:05:31.588 END TEST env_pci 00:05:31.588 ************************************ 00:05:31.588 07:15:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:31.588 07:15:15 env -- env/env.sh@15 -- # uname 00:05:31.588 07:15:15 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:31.588 07:15:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:31.588 07:15:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:31.588 07:15:15 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:31.588 07:15:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.588 07:15:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.588 ************************************ 00:05:31.588 START TEST env_dpdk_post_init 00:05:31.588 ************************************ 00:05:31.589 07:15:15 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:31.589 EAL: Detected CPU lcores: 128 00:05:31.589 EAL: Detected NUMA nodes: 2 00:05:31.589 EAL: Detected shared linkage of DPDK 00:05:31.589 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:31.589 EAL: Selected IOVA mode 'VA' 00:05:31.589 EAL: VFIO support initialized 00:05:31.589 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:31.589 EAL: Using IOMMU type 1 (Type 1) 00:05:31.849 EAL: Ignore mapping IO port bar(1) 00:05:31.849 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:32.110 EAL: Ignore mapping IO port bar(1) 00:05:32.110 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:32.371 EAL: Ignore mapping IO port bar(1) 00:05:32.371 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:32.371 EAL: Ignore mapping IO port bar(1) 00:05:32.632 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:32.632 EAL: Ignore mapping IO port bar(1) 00:05:32.893 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:32.893 EAL: Ignore mapping IO port bar(1) 00:05:33.154 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:33.154 EAL: Ignore mapping IO port bar(1) 00:05:33.154 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:33.415 EAL: Ignore mapping IO port bar(1) 00:05:33.415 EAL: 
Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:33.722 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:33.722 EAL: Ignore mapping IO port bar(1) 00:05:34.006 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:34.006 EAL: Ignore mapping IO port bar(1) 00:05:34.267 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:34.267 EAL: Ignore mapping IO port bar(1) 00:05:34.267 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:34.529 EAL: Ignore mapping IO port bar(1) 00:05:34.529 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:34.791 EAL: Ignore mapping IO port bar(1) 00:05:34.791 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:35.051 EAL: Ignore mapping IO port bar(1) 00:05:35.051 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:35.051 EAL: Ignore mapping IO port bar(1) 00:05:35.310 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:35.310 EAL: Ignore mapping IO port bar(1) 00:05:35.572 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:35.572 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:35.572 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:35.572 Starting DPDK initialization... 00:05:35.572 Starting SPDK post initialization... 00:05:35.572 SPDK NVMe probe 00:05:35.572 Attaching to 0000:65:00.0 00:05:35.572 Attached to 0000:65:00.0 00:05:35.572 Cleaning up... 
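The probe lines above all follow one shape: `Probe PCI driver: <driver> (<vendor:device>) device: <BDF> (socket <n>)`. A small log-parsing sketch (hypothetical post-processing helper, not part of the test suite) that tallies probed devices per driver and socket:

```python
import re
from collections import Counter

PROBE_RE = re.compile(
    r"Probe PCI driver: (?P<driver>\S+) \((?P<id>[0-9a-f]{4}:[0-9a-f]{4})\) "
    r"device: (?P<bdf>\S+) \(socket (?P<socket>\d+)\)"
)

def tally_probes(lines):
    """Count probed PCI devices per (driver, socket) from EAL log lines."""
    counts = Counter()
    for line in lines:
        m = PROBE_RE.search(line)
        if m:
            counts[(m["driver"], int(m["socket"]))] += 1
    return counts

# Sample lines taken from the log above:
log = [
    "EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0)",
    "EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)",
    "EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1)",
]
print(tally_probes(log))
```

Run against the full section above it would report eight spdk_ioat channels per socket plus the single spdk_nvme controller on socket 0.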
00:05:37.498
00:05:37.498 real 0m5.740s
00:05:37.498 user 0m0.109s
00:05:37.498 sys 0m0.175s
00:05:37.498 07:15:21 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:37.498 07:15:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:37.498 ************************************
00:05:37.498 END TEST env_dpdk_post_init
00:05:37.498 ************************************
00:05:37.498 07:15:21 env -- env/env.sh@26 -- # uname
00:05:37.498 07:15:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:37.498 07:15:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:37.498 07:15:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:37.498 07:15:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:37.498 07:15:21 env -- common/autotest_common.sh@10 -- # set +x
00:05:37.498 ************************************
00:05:37.498 START TEST env_mem_callbacks
00:05:37.498 ************************************
00:05:37.498 07:15:21 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:37.498 EAL: Detected CPU lcores: 128
00:05:37.498 EAL: Detected NUMA nodes: 2
00:05:37.498 EAL: Detected shared linkage of DPDK
00:05:37.498 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:37.498 EAL: Selected IOVA mode 'VA'
00:05:37.498 EAL: VFIO support initialized
00:05:37.498 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:37.498
00:05:37.498
00:05:37.498 CUnit - A unit testing framework for C - Version 2.1-3
00:05:37.498 http://cunit.sourceforge.net/
00:05:37.498
00:05:37.498
00:05:37.498 Suite: memory
00:05:37.498 Test: test ...
00:05:37.498 register 0x200000200000 2097152 00:05:37.498 malloc 3145728 00:05:37.498 register 0x200000400000 4194304 00:05:37.498 buf 0x200000500000 len 3145728 PASSED 00:05:37.498 malloc 64 00:05:37.498 buf 0x2000004fff40 len 64 PASSED 00:05:37.498 malloc 4194304 00:05:37.498 register 0x200000800000 6291456 00:05:37.498 buf 0x200000a00000 len 4194304 PASSED 00:05:37.498 free 0x200000500000 3145728 00:05:37.498 free 0x2000004fff40 64 00:05:37.498 unregister 0x200000400000 4194304 PASSED 00:05:37.498 free 0x200000a00000 4194304 00:05:37.498 unregister 0x200000800000 6291456 PASSED 00:05:37.498 malloc 8388608 00:05:37.498 register 0x200000400000 10485760 00:05:37.498 buf 0x200000600000 len 8388608 PASSED 00:05:37.498 free 0x200000600000 8388608 00:05:37.498 unregister 0x200000400000 10485760 PASSED 00:05:37.498 passed 00:05:37.498 00:05:37.498 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.498 suites 1 1 n/a 0 0 00:05:37.498 tests 1 1 1 0 0 00:05:37.498 asserts 15 15 15 0 n/a 00:05:37.498 00:05:37.498 Elapsed time = 0.006 seconds 00:05:37.498 00:05:37.498 real 0m0.067s 00:05:37.498 user 0m0.020s 00:05:37.498 sys 0m0.047s 00:05:37.498 07:15:21 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.498 07:15:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:37.498 ************************************ 00:05:37.498 END TEST env_mem_callbacks 00:05:37.498 ************************************ 00:05:37.498 00:05:37.498 real 0m7.428s 00:05:37.498 user 0m1.010s 00:05:37.498 sys 0m0.968s 00:05:37.498 07:15:21 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.498 07:15:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.498 ************************************ 00:05:37.498 END TEST env 00:05:37.498 ************************************ 00:05:37.498 07:15:21 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:37.498 07:15:21 
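In the mem_callbacks trace above, every `register` is paired with an `unregister` and every malloc'd `buf` with a `free`, showing the callback being notified as regions come and go. A minimal sketch of that notification contract (hypothetical names, not the SPDK C API):

```python
from typing import Callable, Dict

class MemEventNotifier:
    """Invokes registered callbacks on region register/unregister events."""

    def __init__(self):
        self._callbacks: Dict[str, Callable[[str, int, int], None]] = {}
        self.regions: Dict[int, int] = {}  # base address -> length

    def register_callback(self, name: str, cb: Callable[[str, int, int], None]):
        self._callbacks[name] = cb

    def register(self, addr: int, length: int):
        """Add a region and notify every callback, like 'register 0x... <len>'."""
        self.regions[addr] = length
        for cb in self._callbacks.values():
            cb("register", addr, length)

    def unregister(self, addr: int):
        """Drop a region and notify every callback with its recorded length."""
        length = self.regions.pop(addr)
        for cb in self._callbacks.values():
            cb("unregister", addr, length)

events = []
n = MemEventNotifier()
n.register_callback("spdk:(nil)", lambda kind, a, l: events.append((kind, hex(a), l)))
n.register(0x200000400000, 4194304)   # mirrors "register 0x200000400000 4194304"
n.unregister(0x200000400000)          # mirrors "unregister 0x200000400000 4194304"
print(events)
```

The key property the CUnit test checks is the same one the sketch encodes: unregister events carry exactly the address and length that were registered, never a partial range.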
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.498 07:15:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.498 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:05:37.498 ************************************ 00:05:37.498 START TEST rpc 00:05:37.498 ************************************ 00:05:37.498 07:15:21 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:37.759 * Looking for test storage... 00:05:37.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:37.759 07:15:21 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.759 07:15:21 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.759 07:15:21 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.759 07:15:21 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.759 07:15:21 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.759 07:15:21 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.759 07:15:21 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.759 07:15:21 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.759 07:15:21 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.759 07:15:21 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.759 07:15:21 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.759 07:15:21 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.759 07:15:21 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.759 07:15:21 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.759 07:15:21 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.759 07:15:21 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:37.759 07:15:21 rpc -- scripts/common.sh@345 -- # : 1 00:05:37.759 07:15:21 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.759 07:15:21 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.759 07:15:21 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:37.759 07:15:21 rpc -- scripts/common.sh@353 -- # local d=1 00:05:37.759 07:15:21 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.759 07:15:21 rpc -- scripts/common.sh@355 -- # echo 1 00:05:37.759 07:15:21 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.759 07:15:21 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:37.759 07:15:21 rpc -- scripts/common.sh@353 -- # local d=2 00:05:37.759 07:15:21 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.759 07:15:21 rpc -- scripts/common.sh@355 -- # echo 2 00:05:37.759 07:15:21 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.759 07:15:21 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.759 07:15:21 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.759 07:15:21 rpc -- scripts/common.sh@368 -- # return 0 00:05:37.759 07:15:21 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.759 07:15:21 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.759 --rc genhtml_branch_coverage=1 00:05:37.759 --rc genhtml_function_coverage=1 00:05:37.759 --rc genhtml_legend=1 00:05:37.759 --rc geninfo_all_blocks=1 00:05:37.759 --rc geninfo_unexecuted_blocks=1 00:05:37.759 00:05:37.759 ' 00:05:37.759 07:15:21 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.759 --rc genhtml_branch_coverage=1 00:05:37.759 --rc genhtml_function_coverage=1 00:05:37.759 --rc genhtml_legend=1 00:05:37.759 --rc geninfo_all_blocks=1 00:05:37.759 --rc geninfo_unexecuted_blocks=1 00:05:37.759 00:05:37.759 ' 00:05:37.759 07:15:21 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:37.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:37.759 --rc genhtml_branch_coverage=1 00:05:37.759 --rc genhtml_function_coverage=1 00:05:37.759 --rc genhtml_legend=1 00:05:37.759 --rc geninfo_all_blocks=1 00:05:37.759 --rc geninfo_unexecuted_blocks=1 00:05:37.759 00:05:37.759 ' 00:05:37.759 07:15:21 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.759 --rc genhtml_branch_coverage=1 00:05:37.759 --rc genhtml_function_coverage=1 00:05:37.759 --rc genhtml_legend=1 00:05:37.759 --rc geninfo_all_blocks=1 00:05:37.759 --rc geninfo_unexecuted_blocks=1 00:05:37.759 00:05:37.759 ' 00:05:37.759 07:15:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1856184 00:05:37.759 07:15:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.759 07:15:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1856184 00:05:37.759 07:15:21 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:37.759 07:15:21 rpc -- common/autotest_common.sh@835 -- # '[' -z 1856184 ']' 00:05:37.759 07:15:21 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.759 07:15:21 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.759 07:15:21 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.759 07:15:21 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.759 07:15:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.759 [2024-11-26 07:15:21.820570] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
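The xtrace above walks scripts/common.sh's `cmp_versions`, splitting dotted version strings on `.`, `-` and `:` and comparing them field by field (here concluding lcov 1.15 < 2, so branch/function coverage flags are added). A Python sketch of the same comparison (hypothetical helper, not the SPDK shell script itself):

```python
import re

def cmp_versions(v1: str, op: str, v2: str) -> bool:
    """Compare dotted version strings field by field, like scripts/common.sh.

    Fields are split on '.', '-' and ':'; a shorter version is padded with
    zeros, so "2" compares as "2.0" against "1.15".
    """
    a = [int(x) for x in re.split(r"[.:-]", v1) if x.isdigit()]
    b = [int(x) for x in re.split(r"[.:-]", v2) if x.isdigit()]
    length = max(len(a), len(b))
    a += [0] * (length - len(a))
    b += [0] * (length - len(b))
    if op == "<":
        return a < b
    if op == ">":
        return a > b
    return a == b

print(cmp_versions("1.15", "<", "2"))  # the lcov check above: True
```

Field-wise comparison is what makes "1.15" sort below "2" here even though a plain string comparison would also happen to agree; it is essential for cases like "1.9" vs "1.15", where lexicographic comparison gives the wrong answer.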
00:05:37.759 [2024-11-26 07:15:21.820635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856184 ] 00:05:38.020 [2024-11-26 07:15:21.899883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.020 [2024-11-26 07:15:21.935489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:38.020 [2024-11-26 07:15:21.935520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1856184' to capture a snapshot of events at runtime. 00:05:38.020 [2024-11-26 07:15:21.935527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:38.020 [2024-11-26 07:15:21.935534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:38.020 [2024-11-26 07:15:21.935541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1856184 for offline analysis/debug. 
00:05:38.020 [2024-11-26 07:15:21.936113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.020 07:15:22 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.020 07:15:22 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:38.020 07:15:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:38.020 07:15:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:38.020 07:15:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:38.020 07:15:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:38.020 07:15:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.020 07:15:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.020 07:15:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.282 ************************************ 00:05:38.282 START TEST rpc_integrity 00:05:38.282 ************************************ 00:05:38.282 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:38.282 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:38.282 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.282 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.282 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.282 07:15:22 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:38.282 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:38.282 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:38.282 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:38.282 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.282 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.282 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.282 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:38.282 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:38.282 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.282 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.282 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.282 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:38.282 { 00:05:38.282 "name": "Malloc0", 00:05:38.282 "aliases": [ 00:05:38.282 "2423639f-9a0a-4376-9ebb-89ceb45ec240" 00:05:38.282 ], 00:05:38.282 "product_name": "Malloc disk", 00:05:38.282 "block_size": 512, 00:05:38.282 "num_blocks": 16384, 00:05:38.282 "uuid": "2423639f-9a0a-4376-9ebb-89ceb45ec240", 00:05:38.282 "assigned_rate_limits": { 00:05:38.282 "rw_ios_per_sec": 0, 00:05:38.282 "rw_mbytes_per_sec": 0, 00:05:38.282 "r_mbytes_per_sec": 0, 00:05:38.282 "w_mbytes_per_sec": 0 00:05:38.282 }, 00:05:38.282 "claimed": false, 00:05:38.282 "zoned": false, 00:05:38.282 "supported_io_types": { 00:05:38.282 "read": true, 00:05:38.282 "write": true, 00:05:38.282 "unmap": true, 00:05:38.282 "flush": true, 00:05:38.282 "reset": true, 00:05:38.282 "nvme_admin": false, 00:05:38.282 "nvme_io": false, 00:05:38.282 "nvme_io_md": false, 00:05:38.282 "write_zeroes": true, 00:05:38.282 "zcopy": true, 00:05:38.282 "get_zone_info": false, 00:05:38.282 
"zone_management": false, 00:05:38.282 "zone_append": false, 00:05:38.282 "compare": false, 00:05:38.282 "compare_and_write": false, 00:05:38.282 "abort": true, 00:05:38.282 "seek_hole": false, 00:05:38.282 "seek_data": false, 00:05:38.282 "copy": true, 00:05:38.282 "nvme_iov_md": false 00:05:38.282 }, 00:05:38.282 "memory_domains": [ 00:05:38.282 { 00:05:38.282 "dma_device_id": "system", 00:05:38.282 "dma_device_type": 1 00:05:38.282 }, 00:05:38.282 { 00:05:38.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.282 "dma_device_type": 2 00:05:38.282 } 00:05:38.282 ], 00:05:38.282 "driver_specific": {} 00:05:38.282 } 00:05:38.282 ]' 00:05:38.283 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:38.283 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:38.283 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:38.283 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.283 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.283 [2024-11-26 07:15:22.273209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:38.283 [2024-11-26 07:15:22.273238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:38.283 [2024-11-26 07:15:22.273250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf4cb10 00:05:38.283 [2024-11-26 07:15:22.273258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:38.283 [2024-11-26 07:15:22.274622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:38.283 [2024-11-26 07:15:22.274642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:38.283 Passthru0 00:05:38.283 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.283 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:38.283 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.283 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.283 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.283 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:38.283 { 00:05:38.283 "name": "Malloc0", 00:05:38.283 "aliases": [ 00:05:38.283 "2423639f-9a0a-4376-9ebb-89ceb45ec240" 00:05:38.283 ], 00:05:38.283 "product_name": "Malloc disk", 00:05:38.283 "block_size": 512, 00:05:38.283 "num_blocks": 16384, 00:05:38.283 "uuid": "2423639f-9a0a-4376-9ebb-89ceb45ec240", 00:05:38.283 "assigned_rate_limits": { 00:05:38.283 "rw_ios_per_sec": 0, 00:05:38.283 "rw_mbytes_per_sec": 0, 00:05:38.283 "r_mbytes_per_sec": 0, 00:05:38.283 "w_mbytes_per_sec": 0 00:05:38.283 }, 00:05:38.283 "claimed": true, 00:05:38.283 "claim_type": "exclusive_write", 00:05:38.283 "zoned": false, 00:05:38.283 "supported_io_types": { 00:05:38.283 "read": true, 00:05:38.283 "write": true, 00:05:38.283 "unmap": true, 00:05:38.283 "flush": true, 00:05:38.283 "reset": true, 00:05:38.283 "nvme_admin": false, 00:05:38.283 "nvme_io": false, 00:05:38.283 "nvme_io_md": false, 00:05:38.283 "write_zeroes": true, 00:05:38.283 "zcopy": true, 00:05:38.283 "get_zone_info": false, 00:05:38.283 "zone_management": false, 00:05:38.283 "zone_append": false, 00:05:38.283 "compare": false, 00:05:38.283 "compare_and_write": false, 00:05:38.283 "abort": true, 00:05:38.283 "seek_hole": false, 00:05:38.283 "seek_data": false, 00:05:38.283 "copy": true, 00:05:38.283 "nvme_iov_md": false 00:05:38.283 }, 00:05:38.283 "memory_domains": [ 00:05:38.283 { 00:05:38.283 "dma_device_id": "system", 00:05:38.283 "dma_device_type": 1 00:05:38.283 }, 00:05:38.283 { 00:05:38.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.283 "dma_device_type": 2 00:05:38.283 } 00:05:38.283 ], 00:05:38.283 "driver_specific": {} 00:05:38.283 }, 00:05:38.283 { 
00:05:38.283 "name": "Passthru0", 00:05:38.283 "aliases": [ 00:05:38.283 "48202b7e-962e-5136-80e1-8d1dc979455d" 00:05:38.283 ], 00:05:38.283 "product_name": "passthru", 00:05:38.283 "block_size": 512, 00:05:38.283 "num_blocks": 16384, 00:05:38.283 "uuid": "48202b7e-962e-5136-80e1-8d1dc979455d", 00:05:38.283 "assigned_rate_limits": { 00:05:38.283 "rw_ios_per_sec": 0, 00:05:38.283 "rw_mbytes_per_sec": 0, 00:05:38.283 "r_mbytes_per_sec": 0, 00:05:38.283 "w_mbytes_per_sec": 0 00:05:38.283 }, 00:05:38.283 "claimed": false, 00:05:38.283 "zoned": false, 00:05:38.283 "supported_io_types": { 00:05:38.283 "read": true, 00:05:38.283 "write": true, 00:05:38.283 "unmap": true, 00:05:38.283 "flush": true, 00:05:38.283 "reset": true, 00:05:38.283 "nvme_admin": false, 00:05:38.283 "nvme_io": false, 00:05:38.283 "nvme_io_md": false, 00:05:38.283 "write_zeroes": true, 00:05:38.283 "zcopy": true, 00:05:38.283 "get_zone_info": false, 00:05:38.283 "zone_management": false, 00:05:38.283 "zone_append": false, 00:05:38.283 "compare": false, 00:05:38.283 "compare_and_write": false, 00:05:38.283 "abort": true, 00:05:38.283 "seek_hole": false, 00:05:38.283 "seek_data": false, 00:05:38.283 "copy": true, 00:05:38.283 "nvme_iov_md": false 00:05:38.283 }, 00:05:38.283 "memory_domains": [ 00:05:38.283 { 00:05:38.283 "dma_device_id": "system", 00:05:38.283 "dma_device_type": 1 00:05:38.283 }, 00:05:38.283 { 00:05:38.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.283 "dma_device_type": 2 00:05:38.283 } 00:05:38.283 ], 00:05:38.283 "driver_specific": { 00:05:38.283 "passthru": { 00:05:38.283 "name": "Passthru0", 00:05:38.283 "base_bdev_name": "Malloc0" 00:05:38.283 } 00:05:38.283 } 00:05:38.283 } 00:05:38.283 ]' 00:05:38.283 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:38.283 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:38.283 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:38.283 07:15:22 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.283 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.283 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.283 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:38.283 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.283 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.283 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.283 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:38.283 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.283 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.283 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.283 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:38.283 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:38.544 07:15:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:38.544 00:05:38.544 real 0m0.273s 00:05:38.544 user 0m0.171s 00:05:38.544 sys 0m0.036s 00:05:38.544 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.544 07:15:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.544 ************************************ 00:05:38.544 END TEST rpc_integrity 00:05:38.544 ************************************ 00:05:38.544 07:15:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:38.544 07:15:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.544 07:15:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.544 07:15:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.544 ************************************ 00:05:38.544 START TEST rpc_plugins 
00:05:38.544 ************************************ 00:05:38.544 07:15:22 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:38.544 07:15:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:38.544 07:15:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.544 07:15:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.544 07:15:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.544 07:15:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:38.544 07:15:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:38.545 07:15:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.545 07:15:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.545 07:15:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.545 07:15:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:38.545 { 00:05:38.545 "name": "Malloc1", 00:05:38.545 "aliases": [ 00:05:38.545 "db7e16bc-192a-4cb0-9af7-89098a2106c5" 00:05:38.545 ], 00:05:38.545 "product_name": "Malloc disk", 00:05:38.545 "block_size": 4096, 00:05:38.545 "num_blocks": 256, 00:05:38.545 "uuid": "db7e16bc-192a-4cb0-9af7-89098a2106c5", 00:05:38.545 "assigned_rate_limits": { 00:05:38.545 "rw_ios_per_sec": 0, 00:05:38.545 "rw_mbytes_per_sec": 0, 00:05:38.545 "r_mbytes_per_sec": 0, 00:05:38.545 "w_mbytes_per_sec": 0 00:05:38.545 }, 00:05:38.545 "claimed": false, 00:05:38.545 "zoned": false, 00:05:38.545 "supported_io_types": { 00:05:38.545 "read": true, 00:05:38.545 "write": true, 00:05:38.545 "unmap": true, 00:05:38.545 "flush": true, 00:05:38.545 "reset": true, 00:05:38.545 "nvme_admin": false, 00:05:38.545 "nvme_io": false, 00:05:38.545 "nvme_io_md": false, 00:05:38.545 "write_zeroes": true, 00:05:38.545 "zcopy": true, 00:05:38.545 "get_zone_info": false, 00:05:38.545 "zone_management": false, 00:05:38.545 
"zone_append": false, 00:05:38.545 "compare": false, 00:05:38.545 "compare_and_write": false, 00:05:38.545 "abort": true, 00:05:38.545 "seek_hole": false, 00:05:38.545 "seek_data": false, 00:05:38.545 "copy": true, 00:05:38.545 "nvme_iov_md": false 00:05:38.545 }, 00:05:38.545 "memory_domains": [ 00:05:38.545 { 00:05:38.545 "dma_device_id": "system", 00:05:38.545 "dma_device_type": 1 00:05:38.545 }, 00:05:38.545 { 00:05:38.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.545 "dma_device_type": 2 00:05:38.545 } 00:05:38.545 ], 00:05:38.545 "driver_specific": {} 00:05:38.545 } 00:05:38.545 ]' 00:05:38.545 07:15:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:38.545 07:15:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:38.545 07:15:22 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:38.545 07:15:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.545 07:15:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.545 07:15:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.545 07:15:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:38.545 07:15:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.545 07:15:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.545 07:15:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.545 07:15:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:38.545 07:15:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:38.545 07:15:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:38.545 00:05:38.545 real 0m0.101s 00:05:38.545 user 0m0.048s 00:05:38.545 sys 0m0.016s 00:05:38.545 07:15:22 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.545 07:15:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.545 ************************************ 
00:05:38.545 END TEST rpc_plugins 00:05:38.545 ************************************ 00:05:38.545 07:15:22 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:38.545 07:15:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.545 07:15:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.545 07:15:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.805 ************************************ 00:05:38.805 START TEST rpc_trace_cmd_test 00:05:38.805 ************************************ 00:05:38.805 07:15:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:38.805 07:15:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:38.805 07:15:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:38.805 07:15:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.805 07:15:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:38.805 07:15:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.805 07:15:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:38.805 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1856184", 00:05:38.805 "tpoint_group_mask": "0x8", 00:05:38.805 "iscsi_conn": { 00:05:38.805 "mask": "0x2", 00:05:38.805 "tpoint_mask": "0x0" 00:05:38.805 }, 00:05:38.805 "scsi": { 00:05:38.805 "mask": "0x4", 00:05:38.805 "tpoint_mask": "0x0" 00:05:38.805 }, 00:05:38.805 "bdev": { 00:05:38.805 "mask": "0x8", 00:05:38.805 "tpoint_mask": "0xffffffffffffffff" 00:05:38.805 }, 00:05:38.805 "nvmf_rdma": { 00:05:38.805 "mask": "0x10", 00:05:38.805 "tpoint_mask": "0x0" 00:05:38.805 }, 00:05:38.805 "nvmf_tcp": { 00:05:38.805 "mask": "0x20", 00:05:38.805 "tpoint_mask": "0x0" 00:05:38.805 }, 00:05:38.805 "ftl": { 00:05:38.805 "mask": "0x40", 00:05:38.805 "tpoint_mask": "0x0" 00:05:38.805 }, 00:05:38.805 "blobfs": { 00:05:38.805 "mask": "0x80", 00:05:38.805 
"tpoint_mask": "0x0" 00:05:38.805 }, 00:05:38.805 "dsa": { 00:05:38.805 "mask": "0x200", 00:05:38.805 "tpoint_mask": "0x0" 00:05:38.805 }, 00:05:38.805 "thread": { 00:05:38.805 "mask": "0x400", 00:05:38.805 "tpoint_mask": "0x0" 00:05:38.805 }, 00:05:38.805 "nvme_pcie": { 00:05:38.805 "mask": "0x800", 00:05:38.805 "tpoint_mask": "0x0" 00:05:38.805 }, 00:05:38.805 "iaa": { 00:05:38.805 "mask": "0x1000", 00:05:38.805 "tpoint_mask": "0x0" 00:05:38.805 }, 00:05:38.805 "nvme_tcp": { 00:05:38.805 "mask": "0x2000", 00:05:38.805 "tpoint_mask": "0x0" 00:05:38.805 }, 00:05:38.805 "bdev_nvme": { 00:05:38.805 "mask": "0x4000", 00:05:38.805 "tpoint_mask": "0x0" 00:05:38.805 }, 00:05:38.805 "sock": { 00:05:38.805 "mask": "0x8000", 00:05:38.805 "tpoint_mask": "0x0" 00:05:38.805 }, 00:05:38.805 "blob": { 00:05:38.805 "mask": "0x10000", 00:05:38.805 "tpoint_mask": "0x0" 00:05:38.805 }, 00:05:38.805 "bdev_raid": { 00:05:38.805 "mask": "0x20000", 00:05:38.805 "tpoint_mask": "0x0" 00:05:38.805 }, 00:05:38.805 "scheduler": { 00:05:38.805 "mask": "0x40000", 00:05:38.805 "tpoint_mask": "0x0" 00:05:38.805 } 00:05:38.805 }' 00:05:38.805 07:15:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:38.805 07:15:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:38.805 07:15:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:38.805 07:15:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:38.805 07:15:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:38.805 07:15:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:38.805 07:15:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:38.805 07:15:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:38.805 07:15:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:38.805 07:15:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:05:38.805 00:05:38.805 real 0m0.246s 00:05:38.805 user 0m0.208s 00:05:38.805 sys 0m0.030s 00:05:38.805 07:15:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.805 07:15:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:38.805 ************************************ 00:05:38.805 END TEST rpc_trace_cmd_test 00:05:38.805 ************************************ 00:05:39.066 07:15:22 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:39.066 07:15:22 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:39.066 07:15:22 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:39.066 07:15:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.066 07:15:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.066 07:15:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.066 ************************************ 00:05:39.066 START TEST rpc_daemon_integrity 00:05:39.066 ************************************ 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:39.066 { 00:05:39.066 "name": "Malloc2", 00:05:39.066 "aliases": [ 00:05:39.066 "17ab3d32-0a8d-4c7b-8d14-34a052eb5181" 00:05:39.066 ], 00:05:39.066 "product_name": "Malloc disk", 00:05:39.066 "block_size": 512, 00:05:39.066 "num_blocks": 16384, 00:05:39.066 "uuid": "17ab3d32-0a8d-4c7b-8d14-34a052eb5181", 00:05:39.066 "assigned_rate_limits": { 00:05:39.066 "rw_ios_per_sec": 0, 00:05:39.066 "rw_mbytes_per_sec": 0, 00:05:39.066 "r_mbytes_per_sec": 0, 00:05:39.066 "w_mbytes_per_sec": 0 00:05:39.066 }, 00:05:39.066 "claimed": false, 00:05:39.066 "zoned": false, 00:05:39.066 "supported_io_types": { 00:05:39.066 "read": true, 00:05:39.066 "write": true, 00:05:39.066 "unmap": true, 00:05:39.066 "flush": true, 00:05:39.066 "reset": true, 00:05:39.066 "nvme_admin": false, 00:05:39.066 "nvme_io": false, 00:05:39.066 "nvme_io_md": false, 00:05:39.066 "write_zeroes": true, 00:05:39.066 "zcopy": true, 00:05:39.066 "get_zone_info": false, 00:05:39.066 "zone_management": false, 00:05:39.066 "zone_append": false, 00:05:39.066 "compare": false, 00:05:39.066 "compare_and_write": false, 00:05:39.066 "abort": true, 00:05:39.066 "seek_hole": false, 00:05:39.066 "seek_data": false, 00:05:39.066 "copy": true, 00:05:39.066 "nvme_iov_md": false 00:05:39.066 }, 00:05:39.066 "memory_domains": [ 00:05:39.066 { 
00:05:39.066 "dma_device_id": "system", 00:05:39.066 "dma_device_type": 1 00:05:39.066 }, 00:05:39.066 { 00:05:39.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.066 "dma_device_type": 2 00:05:39.066 } 00:05:39.066 ], 00:05:39.066 "driver_specific": {} 00:05:39.066 } 00:05:39.066 ]' 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.066 [2024-11-26 07:15:23.143573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:39.066 [2024-11-26 07:15:23.143602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:39.066 [2024-11-26 07:15:23.143616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfdd380 00:05:39.066 [2024-11-26 07:15:23.143623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:39.066 [2024-11-26 07:15:23.144884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:39.066 [2024-11-26 07:15:23.144905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:39.066 Passthru0 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:39.066 { 00:05:39.066 "name": "Malloc2", 00:05:39.066 "aliases": [ 00:05:39.066 "17ab3d32-0a8d-4c7b-8d14-34a052eb5181" 00:05:39.066 ], 00:05:39.066 "product_name": "Malloc disk", 00:05:39.066 "block_size": 512, 00:05:39.066 "num_blocks": 16384, 00:05:39.066 "uuid": "17ab3d32-0a8d-4c7b-8d14-34a052eb5181", 00:05:39.066 "assigned_rate_limits": { 00:05:39.066 "rw_ios_per_sec": 0, 00:05:39.066 "rw_mbytes_per_sec": 0, 00:05:39.066 "r_mbytes_per_sec": 0, 00:05:39.066 "w_mbytes_per_sec": 0 00:05:39.066 }, 00:05:39.066 "claimed": true, 00:05:39.066 "claim_type": "exclusive_write", 00:05:39.066 "zoned": false, 00:05:39.066 "supported_io_types": { 00:05:39.066 "read": true, 00:05:39.066 "write": true, 00:05:39.066 "unmap": true, 00:05:39.066 "flush": true, 00:05:39.066 "reset": true, 00:05:39.066 "nvme_admin": false, 00:05:39.066 "nvme_io": false, 00:05:39.066 "nvme_io_md": false, 00:05:39.066 "write_zeroes": true, 00:05:39.066 "zcopy": true, 00:05:39.066 "get_zone_info": false, 00:05:39.066 "zone_management": false, 00:05:39.066 "zone_append": false, 00:05:39.066 "compare": false, 00:05:39.066 "compare_and_write": false, 00:05:39.066 "abort": true, 00:05:39.066 "seek_hole": false, 00:05:39.066 "seek_data": false, 00:05:39.066 "copy": true, 00:05:39.066 "nvme_iov_md": false 00:05:39.066 }, 00:05:39.066 "memory_domains": [ 00:05:39.066 { 00:05:39.066 "dma_device_id": "system", 00:05:39.066 "dma_device_type": 1 00:05:39.066 }, 00:05:39.066 { 00:05:39.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.066 "dma_device_type": 2 00:05:39.066 } 00:05:39.066 ], 00:05:39.066 "driver_specific": {} 00:05:39.066 }, 00:05:39.066 { 00:05:39.066 "name": "Passthru0", 00:05:39.066 "aliases": [ 00:05:39.066 "2aff5490-1d51-52fd-9585-56ed333e62f7" 00:05:39.066 ], 00:05:39.066 "product_name": "passthru", 00:05:39.066 "block_size": 512, 00:05:39.066 "num_blocks": 16384, 00:05:39.066 "uuid": 
"2aff5490-1d51-52fd-9585-56ed333e62f7", 00:05:39.066 "assigned_rate_limits": { 00:05:39.066 "rw_ios_per_sec": 0, 00:05:39.066 "rw_mbytes_per_sec": 0, 00:05:39.066 "r_mbytes_per_sec": 0, 00:05:39.066 "w_mbytes_per_sec": 0 00:05:39.066 }, 00:05:39.066 "claimed": false, 00:05:39.066 "zoned": false, 00:05:39.066 "supported_io_types": { 00:05:39.066 "read": true, 00:05:39.066 "write": true, 00:05:39.066 "unmap": true, 00:05:39.066 "flush": true, 00:05:39.066 "reset": true, 00:05:39.066 "nvme_admin": false, 00:05:39.066 "nvme_io": false, 00:05:39.066 "nvme_io_md": false, 00:05:39.066 "write_zeroes": true, 00:05:39.066 "zcopy": true, 00:05:39.066 "get_zone_info": false, 00:05:39.066 "zone_management": false, 00:05:39.066 "zone_append": false, 00:05:39.066 "compare": false, 00:05:39.066 "compare_and_write": false, 00:05:39.066 "abort": true, 00:05:39.066 "seek_hole": false, 00:05:39.066 "seek_data": false, 00:05:39.066 "copy": true, 00:05:39.066 "nvme_iov_md": false 00:05:39.066 }, 00:05:39.066 "memory_domains": [ 00:05:39.066 { 00:05:39.066 "dma_device_id": "system", 00:05:39.066 "dma_device_type": 1 00:05:39.066 }, 00:05:39.066 { 00:05:39.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.066 "dma_device_type": 2 00:05:39.066 } 00:05:39.066 ], 00:05:39.066 "driver_specific": { 00:05:39.066 "passthru": { 00:05:39.066 "name": "Passthru0", 00:05:39.066 "base_bdev_name": "Malloc2" 00:05:39.066 } 00:05:39.066 } 00:05:39.066 } 00:05:39.066 ]' 00:05:39.066 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:39.326 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:39.326 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:39.327 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.327 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.327 07:15:23 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.327 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:39.327 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.327 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.327 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.327 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:39.327 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.327 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.327 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.327 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:39.327 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:39.327 07:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:39.327 00:05:39.327 real 0m0.303s 00:05:39.327 user 0m0.200s 00:05:39.327 sys 0m0.038s 00:05:39.327 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.327 07:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.327 ************************************ 00:05:39.327 END TEST rpc_daemon_integrity 00:05:39.327 ************************************ 00:05:39.327 07:15:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:39.327 07:15:23 rpc -- rpc/rpc.sh@84 -- # killprocess 1856184 00:05:39.327 07:15:23 rpc -- common/autotest_common.sh@954 -- # '[' -z 1856184 ']' 00:05:39.327 07:15:23 rpc -- common/autotest_common.sh@958 -- # kill -0 1856184 00:05:39.327 07:15:23 rpc -- common/autotest_common.sh@959 -- # uname 00:05:39.327 07:15:23 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.327 07:15:23 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1856184 00:05:39.327 07:15:23 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.327 07:15:23 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.327 07:15:23 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1856184' 00:05:39.327 killing process with pid 1856184 00:05:39.327 07:15:23 rpc -- common/autotest_common.sh@973 -- # kill 1856184 00:05:39.327 07:15:23 rpc -- common/autotest_common.sh@978 -- # wait 1856184 00:05:39.587 00:05:39.587 real 0m2.045s 00:05:39.587 user 0m2.716s 00:05:39.587 sys 0m0.687s 00:05:39.587 07:15:23 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.587 07:15:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.587 ************************************ 00:05:39.587 END TEST rpc 00:05:39.587 ************************************ 00:05:39.587 07:15:23 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:39.587 07:15:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.587 07:15:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.587 07:15:23 -- common/autotest_common.sh@10 -- # set +x 00:05:39.587 ************************************ 00:05:39.587 START TEST skip_rpc 00:05:39.587 ************************************ 00:05:39.587 07:15:23 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:39.850 * Looking for test storage... 
00:05:39.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:39.850 07:15:23 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.850 07:15:23 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.850 07:15:23 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.850 07:15:23 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.850 07:15:23 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:39.850 07:15:23 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.850 07:15:23 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.850 --rc genhtml_branch_coverage=1 00:05:39.850 --rc genhtml_function_coverage=1 00:05:39.850 --rc genhtml_legend=1 00:05:39.850 --rc geninfo_all_blocks=1 00:05:39.850 --rc geninfo_unexecuted_blocks=1 00:05:39.850 00:05:39.850 ' 00:05:39.850 07:15:23 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.850 --rc genhtml_branch_coverage=1 00:05:39.850 --rc genhtml_function_coverage=1 00:05:39.850 --rc genhtml_legend=1 00:05:39.850 --rc geninfo_all_blocks=1 00:05:39.850 --rc geninfo_unexecuted_blocks=1 00:05:39.850 00:05:39.850 ' 00:05:39.850 07:15:23 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:39.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.850 --rc genhtml_branch_coverage=1 00:05:39.850 --rc genhtml_function_coverage=1 00:05:39.850 --rc genhtml_legend=1 00:05:39.850 --rc geninfo_all_blocks=1 00:05:39.850 --rc geninfo_unexecuted_blocks=1 00:05:39.850 00:05:39.850 ' 00:05:39.850 07:15:23 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.850 --rc genhtml_branch_coverage=1 00:05:39.850 --rc genhtml_function_coverage=1 00:05:39.850 --rc genhtml_legend=1 00:05:39.850 --rc geninfo_all_blocks=1 00:05:39.850 --rc geninfo_unexecuted_blocks=1 00:05:39.850 00:05:39.850 ' 00:05:39.850 07:15:23 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:39.850 07:15:23 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:39.850 07:15:23 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:39.850 07:15:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.850 07:15:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.850 07:15:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.850 ************************************ 00:05:39.850 START TEST skip_rpc 00:05:39.850 ************************************ 00:05:39.850 07:15:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:39.850 07:15:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1856714 00:05:39.850 07:15:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.850 07:15:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:39.850 07:15:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:05:39.850 [2024-11-26 07:15:23.961540] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:05:39.850 [2024-11-26 07:15:23.961590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856714 ] 00:05:40.111 [2024-11-26 07:15:24.039815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.111 [2024-11-26 07:15:24.075866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:45.404 07:15:28 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1856714 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1856714 ']' 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1856714 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1856714 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1856714' 00:05:45.404 killing process with pid 1856714 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1856714 00:05:45.404 07:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1856714 00:05:45.404 00:05:45.404 real 0m5.285s 00:05:45.404 user 0m5.096s 00:05:45.404 sys 0m0.238s 00:05:45.404 07:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.404 07:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.404 ************************************ 00:05:45.404 END TEST skip_rpc 00:05:45.404 ************************************ 00:05:45.404 07:15:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:45.404 07:15:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.404 07:15:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.404 07:15:29 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.404 ************************************ 00:05:45.404 START TEST skip_rpc_with_json 00:05:45.404 ************************************ 00:05:45.404 07:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:45.404 07:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:45.404 07:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1857751 00:05:45.404 07:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.404 07:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1857751 00:05:45.404 07:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.404 07:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1857751 ']' 00:05:45.404 07:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.404 07:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.404 07:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.404 07:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.404 07:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.404 [2024-11-26 07:15:29.328133] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:05:45.404 [2024-11-26 07:15:29.328187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857751 ] 00:05:45.404 [2024-11-26 07:15:29.407593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.404 [2024-11-26 07:15:29.443766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.061 07:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.061 07:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:46.061 07:15:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:46.061 07:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.061 07:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.061 [2024-11-26 07:15:30.120359] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:46.061 request: 00:05:46.061 { 00:05:46.061 "trtype": "tcp", 00:05:46.061 "method": "nvmf_get_transports", 00:05:46.061 "req_id": 1 00:05:46.061 } 00:05:46.061 Got JSON-RPC error response 00:05:46.061 response: 00:05:46.061 { 00:05:46.061 "code": -19, 00:05:46.061 "message": "No such device" 00:05:46.061 } 00:05:46.061 07:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:46.061 07:15:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:46.061 07:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.061 07:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.061 [2024-11-26 07:15:30.128471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:46.061 07:15:30 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.061 07:15:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:46.061 07:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.061 07:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.322 07:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.322 07:15:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:46.322 { 00:05:46.322 "subsystems": [ 00:05:46.322 { 00:05:46.322 "subsystem": "fsdev", 00:05:46.322 "config": [ 00:05:46.322 { 00:05:46.322 "method": "fsdev_set_opts", 00:05:46.322 "params": { 00:05:46.322 "fsdev_io_pool_size": 65535, 00:05:46.322 "fsdev_io_cache_size": 256 00:05:46.322 } 00:05:46.322 } 00:05:46.322 ] 00:05:46.322 }, 00:05:46.322 { 00:05:46.322 "subsystem": "vfio_user_target", 00:05:46.322 "config": null 00:05:46.322 }, 00:05:46.322 { 00:05:46.322 "subsystem": "keyring", 00:05:46.322 "config": [] 00:05:46.322 }, 00:05:46.322 { 00:05:46.322 "subsystem": "iobuf", 00:05:46.322 "config": [ 00:05:46.322 { 00:05:46.322 "method": "iobuf_set_options", 00:05:46.322 "params": { 00:05:46.322 "small_pool_count": 8192, 00:05:46.322 "large_pool_count": 1024, 00:05:46.322 "small_bufsize": 8192, 00:05:46.322 "large_bufsize": 135168, 00:05:46.322 "enable_numa": false 00:05:46.322 } 00:05:46.322 } 00:05:46.322 ] 00:05:46.322 }, 00:05:46.322 { 00:05:46.322 "subsystem": "sock", 00:05:46.322 "config": [ 00:05:46.322 { 00:05:46.322 "method": "sock_set_default_impl", 00:05:46.322 "params": { 00:05:46.322 "impl_name": "posix" 00:05:46.322 } 00:05:46.322 }, 00:05:46.322 { 00:05:46.322 "method": "sock_impl_set_options", 00:05:46.322 "params": { 00:05:46.322 "impl_name": "ssl", 00:05:46.322 "recv_buf_size": 4096, 00:05:46.322 "send_buf_size": 4096, 
00:05:46.322 "enable_recv_pipe": true, 00:05:46.322 "enable_quickack": false, 00:05:46.322 "enable_placement_id": 0, 00:05:46.322 "enable_zerocopy_send_server": true, 00:05:46.322 "enable_zerocopy_send_client": false, 00:05:46.322 "zerocopy_threshold": 0, 00:05:46.322 "tls_version": 0, 00:05:46.322 "enable_ktls": false 00:05:46.322 } 00:05:46.322 }, 00:05:46.322 { 00:05:46.322 "method": "sock_impl_set_options", 00:05:46.322 "params": { 00:05:46.322 "impl_name": "posix", 00:05:46.322 "recv_buf_size": 2097152, 00:05:46.322 "send_buf_size": 2097152, 00:05:46.322 "enable_recv_pipe": true, 00:05:46.322 "enable_quickack": false, 00:05:46.322 "enable_placement_id": 0, 00:05:46.322 "enable_zerocopy_send_server": true, 00:05:46.322 "enable_zerocopy_send_client": false, 00:05:46.322 "zerocopy_threshold": 0, 00:05:46.322 "tls_version": 0, 00:05:46.322 "enable_ktls": false 00:05:46.322 } 00:05:46.322 } 00:05:46.322 ] 00:05:46.322 }, 00:05:46.322 { 00:05:46.322 "subsystem": "vmd", 00:05:46.322 "config": [] 00:05:46.322 }, 00:05:46.322 { 00:05:46.322 "subsystem": "accel", 00:05:46.322 "config": [ 00:05:46.322 { 00:05:46.322 "method": "accel_set_options", 00:05:46.322 "params": { 00:05:46.322 "small_cache_size": 128, 00:05:46.322 "large_cache_size": 16, 00:05:46.322 "task_count": 2048, 00:05:46.322 "sequence_count": 2048, 00:05:46.322 "buf_count": 2048 00:05:46.322 } 00:05:46.322 } 00:05:46.322 ] 00:05:46.322 }, 00:05:46.322 { 00:05:46.322 "subsystem": "bdev", 00:05:46.322 "config": [ 00:05:46.322 { 00:05:46.322 "method": "bdev_set_options", 00:05:46.322 "params": { 00:05:46.322 "bdev_io_pool_size": 65535, 00:05:46.322 "bdev_io_cache_size": 256, 00:05:46.322 "bdev_auto_examine": true, 00:05:46.322 "iobuf_small_cache_size": 128, 00:05:46.322 "iobuf_large_cache_size": 16 00:05:46.322 } 00:05:46.322 }, 00:05:46.322 { 00:05:46.322 "method": "bdev_raid_set_options", 00:05:46.322 "params": { 00:05:46.322 "process_window_size_kb": 1024, 00:05:46.322 "process_max_bandwidth_mb_sec": 0 
00:05:46.322 } 00:05:46.322 }, 00:05:46.322 { 00:05:46.322 "method": "bdev_iscsi_set_options", 00:05:46.322 "params": { 00:05:46.322 "timeout_sec": 30 00:05:46.322 } 00:05:46.322 }, 00:05:46.322 { 00:05:46.322 "method": "bdev_nvme_set_options", 00:05:46.322 "params": { 00:05:46.322 "action_on_timeout": "none", 00:05:46.322 "timeout_us": 0, 00:05:46.322 "timeout_admin_us": 0, 00:05:46.322 "keep_alive_timeout_ms": 10000, 00:05:46.322 "arbitration_burst": 0, 00:05:46.322 "low_priority_weight": 0, 00:05:46.322 "medium_priority_weight": 0, 00:05:46.322 "high_priority_weight": 0, 00:05:46.322 "nvme_adminq_poll_period_us": 10000, 00:05:46.322 "nvme_ioq_poll_period_us": 0, 00:05:46.322 "io_queue_requests": 0, 00:05:46.322 "delay_cmd_submit": true, 00:05:46.322 "transport_retry_count": 4, 00:05:46.322 "bdev_retry_count": 3, 00:05:46.322 "transport_ack_timeout": 0, 00:05:46.322 "ctrlr_loss_timeout_sec": 0, 00:05:46.322 "reconnect_delay_sec": 0, 00:05:46.322 "fast_io_fail_timeout_sec": 0, 00:05:46.322 "disable_auto_failback": false, 00:05:46.322 "generate_uuids": false, 00:05:46.322 "transport_tos": 0, 00:05:46.322 "nvme_error_stat": false, 00:05:46.322 "rdma_srq_size": 0, 00:05:46.322 "io_path_stat": false, 00:05:46.322 "allow_accel_sequence": false, 00:05:46.322 "rdma_max_cq_size": 0, 00:05:46.322 "rdma_cm_event_timeout_ms": 0, 00:05:46.322 "dhchap_digests": [ 00:05:46.322 "sha256", 00:05:46.322 "sha384", 00:05:46.322 "sha512" 00:05:46.322 ], 00:05:46.322 "dhchap_dhgroups": [ 00:05:46.322 "null", 00:05:46.322 "ffdhe2048", 00:05:46.322 "ffdhe3072", 00:05:46.322 "ffdhe4096", 00:05:46.322 "ffdhe6144", 00:05:46.323 "ffdhe8192" 00:05:46.323 ] 00:05:46.323 } 00:05:46.323 }, 00:05:46.323 { 00:05:46.323 "method": "bdev_nvme_set_hotplug", 00:05:46.323 "params": { 00:05:46.323 "period_us": 100000, 00:05:46.323 "enable": false 00:05:46.323 } 00:05:46.323 }, 00:05:46.323 { 00:05:46.323 "method": "bdev_wait_for_examine" 00:05:46.323 } 00:05:46.323 ] 00:05:46.323 }, 00:05:46.323 { 
00:05:46.323 "subsystem": "scsi", 00:05:46.323 "config": null 00:05:46.323 }, 00:05:46.323 { 00:05:46.323 "subsystem": "scheduler", 00:05:46.323 "config": [ 00:05:46.323 { 00:05:46.323 "method": "framework_set_scheduler", 00:05:46.323 "params": { 00:05:46.323 "name": "static" 00:05:46.323 } 00:05:46.323 } 00:05:46.323 ] 00:05:46.323 }, 00:05:46.323 { 00:05:46.323 "subsystem": "vhost_scsi", 00:05:46.323 "config": [] 00:05:46.323 }, 00:05:46.323 { 00:05:46.323 "subsystem": "vhost_blk", 00:05:46.323 "config": [] 00:05:46.323 }, 00:05:46.323 { 00:05:46.323 "subsystem": "ublk", 00:05:46.323 "config": [] 00:05:46.323 }, 00:05:46.323 { 00:05:46.323 "subsystem": "nbd", 00:05:46.323 "config": [] 00:05:46.323 }, 00:05:46.323 { 00:05:46.323 "subsystem": "nvmf", 00:05:46.323 "config": [ 00:05:46.323 { 00:05:46.323 "method": "nvmf_set_config", 00:05:46.323 "params": { 00:05:46.323 "discovery_filter": "match_any", 00:05:46.323 "admin_cmd_passthru": { 00:05:46.323 "identify_ctrlr": false 00:05:46.323 }, 00:05:46.323 "dhchap_digests": [ 00:05:46.323 "sha256", 00:05:46.323 "sha384", 00:05:46.323 "sha512" 00:05:46.323 ], 00:05:46.323 "dhchap_dhgroups": [ 00:05:46.323 "null", 00:05:46.323 "ffdhe2048", 00:05:46.323 "ffdhe3072", 00:05:46.323 "ffdhe4096", 00:05:46.323 "ffdhe6144", 00:05:46.323 "ffdhe8192" 00:05:46.323 ] 00:05:46.323 } 00:05:46.323 }, 00:05:46.323 { 00:05:46.323 "method": "nvmf_set_max_subsystems", 00:05:46.323 "params": { 00:05:46.323 "max_subsystems": 1024 00:05:46.323 } 00:05:46.323 }, 00:05:46.323 { 00:05:46.323 "method": "nvmf_set_crdt", 00:05:46.323 "params": { 00:05:46.323 "crdt1": 0, 00:05:46.323 "crdt2": 0, 00:05:46.323 "crdt3": 0 00:05:46.323 } 00:05:46.323 }, 00:05:46.323 { 00:05:46.323 "method": "nvmf_create_transport", 00:05:46.323 "params": { 00:05:46.323 "trtype": "TCP", 00:05:46.323 "max_queue_depth": 128, 00:05:46.323 "max_io_qpairs_per_ctrlr": 127, 00:05:46.323 "in_capsule_data_size": 4096, 00:05:46.323 "max_io_size": 131072, 00:05:46.323 
"io_unit_size": 131072, 00:05:46.323 "max_aq_depth": 128, 00:05:46.323 "num_shared_buffers": 511, 00:05:46.323 "buf_cache_size": 4294967295, 00:05:46.323 "dif_insert_or_strip": false, 00:05:46.323 "zcopy": false, 00:05:46.323 "c2h_success": true, 00:05:46.323 "sock_priority": 0, 00:05:46.323 "abort_timeout_sec": 1, 00:05:46.323 "ack_timeout": 0, 00:05:46.323 "data_wr_pool_size": 0 00:05:46.323 } 00:05:46.323 } 00:05:46.323 ] 00:05:46.323 }, 00:05:46.323 { 00:05:46.323 "subsystem": "iscsi", 00:05:46.323 "config": [ 00:05:46.323 { 00:05:46.323 "method": "iscsi_set_options", 00:05:46.323 "params": { 00:05:46.323 "node_base": "iqn.2016-06.io.spdk", 00:05:46.323 "max_sessions": 128, 00:05:46.323 "max_connections_per_session": 2, 00:05:46.323 "max_queue_depth": 64, 00:05:46.323 "default_time2wait": 2, 00:05:46.323 "default_time2retain": 20, 00:05:46.323 "first_burst_length": 8192, 00:05:46.323 "immediate_data": true, 00:05:46.323 "allow_duplicated_isid": false, 00:05:46.323 "error_recovery_level": 0, 00:05:46.323 "nop_timeout": 60, 00:05:46.323 "nop_in_interval": 30, 00:05:46.323 "disable_chap": false, 00:05:46.323 "require_chap": false, 00:05:46.323 "mutual_chap": false, 00:05:46.323 "chap_group": 0, 00:05:46.323 "max_large_datain_per_connection": 64, 00:05:46.323 "max_r2t_per_connection": 4, 00:05:46.323 "pdu_pool_size": 36864, 00:05:46.323 "immediate_data_pool_size": 16384, 00:05:46.323 "data_out_pool_size": 2048 00:05:46.323 } 00:05:46.323 } 00:05:46.323 ] 00:05:46.323 } 00:05:46.323 ] 00:05:46.323 } 00:05:46.323 07:15:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:46.323 07:15:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1857751 00:05:46.323 07:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1857751 ']' 00:05:46.323 07:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1857751 00:05:46.323 07:15:30 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:05:46.323 07:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.323 07:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1857751 00:05:46.323 07:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.323 07:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.323 07:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1857751' 00:05:46.323 killing process with pid 1857751 00:05:46.323 07:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1857751 00:05:46.323 07:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1857751 00:05:46.583 07:15:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1858089 00:05:46.583 07:15:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:46.583 07:15:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1858089 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1858089 ']' 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1858089 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1858089 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1858089' 00:05:51.874 killing process with pid 1858089 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1858089 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1858089 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:51.874 00:05:51.874 real 0m6.559s 00:05:51.874 user 0m6.445s 00:05:51.874 sys 0m0.545s 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.874 ************************************ 00:05:51.874 END TEST skip_rpc_with_json 00:05:51.874 ************************************ 00:05:51.874 07:15:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:51.874 07:15:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.874 07:15:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.874 07:15:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.874 ************************************ 00:05:51.874 START TEST skip_rpc_with_delay 00:05:51.874 ************************************ 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.874 [2024-11-26 07:15:35.952820] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:51.874 00:05:51.874 real 0m0.072s 00:05:51.874 user 0m0.048s 00:05:51.874 sys 0m0.024s 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.874 07:15:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:51.874 ************************************ 00:05:51.874 END TEST skip_rpc_with_delay 00:05:51.874 ************************************ 00:05:51.874 07:15:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:51.874 07:15:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:51.874 07:15:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:51.874 07:15:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.874 07:15:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.874 07:15:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.135 ************************************ 00:05:52.135 START TEST exit_on_failed_rpc_init 00:05:52.135 ************************************ 00:05:52.135 07:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:52.135 07:15:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1859161 00:05:52.135 07:15:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1859161 00:05:52.135 07:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1859161 ']' 00:05:52.135 07:15:36 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.135 07:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.135 07:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.135 07:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.135 07:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:52.135 07:15:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.135 [2024-11-26 07:15:36.105205] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:05:52.135 [2024-11-26 07:15:36.105257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1859161 ] 00:05:52.135 [2024-11-26 07:15:36.183627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.135 [2024-11-26 07:15:36.220741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.077 07:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.077 07:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:53.077 07:15:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.077 07:15:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:53.077 07:15:36 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:53.077 07:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:53.077 07:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.077 07:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.077 07:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.077 07:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.077 07:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.077 07:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.077 07:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.077 07:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:53.077 07:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:53.077 [2024-11-26 07:15:36.936690] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:05:53.077 [2024-11-26 07:15:36.936746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1859485 ] 00:05:53.077 [2024-11-26 07:15:37.031668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.077 [2024-11-26 07:15:37.067563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.077 [2024-11-26 07:15:37.067612] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:53.077 [2024-11-26 07:15:37.067622] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:53.077 [2024-11-26 07:15:37.067630] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:53.077 07:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:53.077 07:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:53.077 07:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:53.077 07:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:53.077 07:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:53.077 07:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:53.077 07:15:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:53.077 07:15:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1859161 00:05:53.077 07:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1859161 ']' 00:05:53.077 07:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1859161 00:05:53.077 07:15:37 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:53.077 07:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.077 07:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1859161 00:05:53.077 07:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.077 07:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.077 07:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1859161' 00:05:53.077 killing process with pid 1859161 00:05:53.077 07:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1859161 00:05:53.077 07:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1859161 00:05:53.338 00:05:53.338 real 0m1.333s 00:05:53.338 user 0m1.545s 00:05:53.338 sys 0m0.390s 00:05:53.338 07:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.338 07:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:53.338 ************************************ 00:05:53.338 END TEST exit_on_failed_rpc_init 00:05:53.338 ************************************ 00:05:53.338 07:15:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:53.338 00:05:53.338 real 0m13.723s 00:05:53.338 user 0m13.339s 00:05:53.338 sys 0m1.491s 00:05:53.338 07:15:37 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.338 07:15:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.338 ************************************ 00:05:53.338 END TEST skip_rpc 00:05:53.338 ************************************ 00:05:53.338 07:15:37 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:53.338 07:15:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.338 07:15:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.338 07:15:37 -- common/autotest_common.sh@10 -- # set +x 00:05:53.599 ************************************ 00:05:53.599 START TEST rpc_client 00:05:53.599 ************************************ 00:05:53.599 07:15:37 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:53.599 * Looking for test storage... 00:05:53.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:53.599 07:15:37 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.599 07:15:37 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.599 07:15:37 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.599 07:15:37 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.599 07:15:37 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:53.599 07:15:37 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.599 07:15:37 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.599 --rc genhtml_branch_coverage=1 00:05:53.599 --rc genhtml_function_coverage=1 00:05:53.599 --rc genhtml_legend=1 00:05:53.599 --rc geninfo_all_blocks=1 00:05:53.599 --rc geninfo_unexecuted_blocks=1 00:05:53.599 00:05:53.599 ' 00:05:53.599 07:15:37 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.599 --rc genhtml_branch_coverage=1 
00:05:53.599 --rc genhtml_function_coverage=1 00:05:53.599 --rc genhtml_legend=1 00:05:53.599 --rc geninfo_all_blocks=1 00:05:53.599 --rc geninfo_unexecuted_blocks=1 00:05:53.599 00:05:53.599 ' 00:05:53.599 07:15:37 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.599 --rc genhtml_branch_coverage=1 00:05:53.599 --rc genhtml_function_coverage=1 00:05:53.599 --rc genhtml_legend=1 00:05:53.599 --rc geninfo_all_blocks=1 00:05:53.599 --rc geninfo_unexecuted_blocks=1 00:05:53.599 00:05:53.599 ' 00:05:53.599 07:15:37 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.599 --rc genhtml_branch_coverage=1 00:05:53.599 --rc genhtml_function_coverage=1 00:05:53.599 --rc genhtml_legend=1 00:05:53.599 --rc geninfo_all_blocks=1 00:05:53.599 --rc geninfo_unexecuted_blocks=1 00:05:53.599 00:05:53.599 ' 00:05:53.599 07:15:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:53.599 OK 00:05:53.599 07:15:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:53.599 00:05:53.599 real 0m0.221s 00:05:53.599 user 0m0.125s 00:05:53.599 sys 0m0.104s 00:05:53.599 07:15:37 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.599 07:15:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:53.599 ************************************ 00:05:53.599 END TEST rpc_client 00:05:53.599 ************************************ 00:05:53.861 07:15:37 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:53.861 07:15:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.861 07:15:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.861 07:15:37 -- common/autotest_common.sh@10 
-- # set +x 00:05:53.861 ************************************ 00:05:53.861 START TEST json_config 00:05:53.861 ************************************ 00:05:53.861 07:15:37 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:53.861 07:15:37 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.861 07:15:37 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.861 07:15:37 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.861 07:15:37 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.861 07:15:37 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.861 07:15:37 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.861 07:15:37 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.861 07:15:37 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.861 07:15:37 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.861 07:15:37 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.861 07:15:37 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.861 07:15:37 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.861 07:15:37 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.861 07:15:37 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.861 07:15:37 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.861 07:15:37 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:53.861 07:15:37 json_config -- scripts/common.sh@345 -- # : 1 00:05:53.861 07:15:37 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.861 07:15:37 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.861 07:15:37 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:53.861 07:15:37 json_config -- scripts/common.sh@353 -- # local d=1 00:05:53.861 07:15:37 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.861 07:15:37 json_config -- scripts/common.sh@355 -- # echo 1 00:05:53.861 07:15:37 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.861 07:15:37 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:53.861 07:15:37 json_config -- scripts/common.sh@353 -- # local d=2 00:05:53.861 07:15:37 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.861 07:15:37 json_config -- scripts/common.sh@355 -- # echo 2 00:05:53.861 07:15:37 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.861 07:15:37 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.861 07:15:37 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.861 07:15:37 json_config -- scripts/common.sh@368 -- # return 0 00:05:53.861 07:15:37 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.861 07:15:37 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.861 --rc genhtml_branch_coverage=1 00:05:53.861 --rc genhtml_function_coverage=1 00:05:53.861 --rc genhtml_legend=1 00:05:53.861 --rc geninfo_all_blocks=1 00:05:53.861 --rc geninfo_unexecuted_blocks=1 00:05:53.861 00:05:53.861 ' 00:05:53.861 07:15:37 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.861 --rc genhtml_branch_coverage=1 00:05:53.862 --rc genhtml_function_coverage=1 00:05:53.862 --rc genhtml_legend=1 00:05:53.862 --rc geninfo_all_blocks=1 00:05:53.862 --rc geninfo_unexecuted_blocks=1 00:05:53.862 00:05:53.862 ' 00:05:53.862 07:15:37 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.862 --rc genhtml_branch_coverage=1 00:05:53.862 --rc genhtml_function_coverage=1 00:05:53.862 --rc genhtml_legend=1 00:05:53.862 --rc geninfo_all_blocks=1 00:05:53.862 --rc geninfo_unexecuted_blocks=1 00:05:53.862 00:05:53.862 ' 00:05:53.862 07:15:37 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.862 --rc genhtml_branch_coverage=1 00:05:53.862 --rc genhtml_function_coverage=1 00:05:53.862 --rc genhtml_legend=1 00:05:53.862 --rc geninfo_all_blocks=1 00:05:53.862 --rc geninfo_unexecuted_blocks=1 00:05:53.862 00:05:53.862 ' 00:05:53.862 07:15:37 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:53.862 07:15:37 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:53.862 07:15:37 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:53.862 07:15:37 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:53.862 07:15:37 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:53.862 07:15:37 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.862 07:15:37 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.862 07:15:37 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.862 07:15:37 json_config -- paths/export.sh@5 -- # export PATH 00:05:53.862 07:15:37 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@51 -- # : 0 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:53.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:53.862 07:15:37 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:53.862 07:15:37 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:53.862 07:15:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:53.862 07:15:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:53.862 07:15:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:53.862 07:15:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:53.862 07:15:37 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:53.862 07:15:37 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:53.862 07:15:37 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:53.862 07:15:37 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:53.862 07:15:37 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:53.862 07:15:37 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:53.862 07:15:37 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:53.862 07:15:37 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:53.862 07:15:37 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:53.862 07:15:37 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:53.862 07:15:37 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:53.862 INFO: JSON configuration test init 00:05:53.862 07:15:37 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:53.862 07:15:37 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:53.862 07:15:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.862 07:15:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.124 07:15:37 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:54.124 07:15:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:54.124 07:15:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.124 07:15:38 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:54.124 07:15:38 json_config -- json_config/common.sh@9 -- # local app=target 00:05:54.124 07:15:38 json_config -- json_config/common.sh@10 -- # shift 00:05:54.124 07:15:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:54.124 07:15:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:54.124 07:15:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:54.124 07:15:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.124 07:15:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.124 07:15:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1859660 00:05:54.124 07:15:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:54.124 Waiting for target to run... 
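The `cmp_versions 1.15 '<' 2` trace earlier in this chunk (the lcov version gate in scripts/common.sh) splits both version strings on `.`, `-`, and `:` and compares them component by component. A minimal standalone sketch of that idea — `ver_lt` is a hypothetical helper name, not SPDK's actual implementation:

```shell
# Component-wise "less than" for dotted version strings, in the spirit of
# the cmp_versions trace above. Illustrative sketch, not scripts/common.sh.
ver_lt() {
    local IFS=.-:                 # split on the same separators the trace uses
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) v
    for (( v = 0; v < len; v++ )); do
        local x=${a[v]:-0} y=${b[v]:-0}   # missing components compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                      # equal versions are not "less than"
}
```

So `ver_lt 1.15 2` succeeds (1 < 2 decides it at the first component), matching the branch the trace takes before enabling the extra LCOV options.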
00:05:54.124 07:15:38 json_config -- json_config/common.sh@25 -- # waitforlisten 1859660 /var/tmp/spdk_tgt.sock 00:05:54.124 07:15:38 json_config -- common/autotest_common.sh@835 -- # '[' -z 1859660 ']' 00:05:54.124 07:15:38 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:54.124 07:15:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:54.124 07:15:38 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.124 07:15:38 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:54.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:54.124 07:15:38 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.124 07:15:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.124 [2024-11-26 07:15:38.071210] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
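`waitforlisten` in the record above blocks until the freshly launched spdk_tgt is accepting connections on `/var/tmp/spdk_tgt.sock`. A hedged sketch of that wait loop — `waitforsocket` is a hypothetical stand-in; the real helper lives in autotest_common.sh and does more (retry budget handling, RPC-level checks):

```shell
# Poll until a UNIX-domain socket accepts connections or the owning process
# dies. Illustrative stand-in for the waitforlisten pattern in the trace.
waitforsocket() {
    local pid=$1 sock=$2 retries=${3:-100}
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process exited
        if python3 -c 'import socket, sys
s = socket.socket(socket.AF_UNIX)
s.connect(sys.argv[1])' "$sock" 2>/dev/null; then
            return 0                             # socket is accepting connections
        fi
        sleep 0.1
    done
    return 1                                     # timed out
}
```

The `kill -0` probe matters: without it, a target that crashes during startup (as exercised by the exit_on_failed_rpc_init test earlier in the log) would make the caller spin for the full timeout instead of failing fast.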
00:05:54.124 [2024-11-26 07:15:38.071317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1859660 ] 00:05:54.385 [2024-11-26 07:15:38.346937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.385 [2024-11-26 07:15:38.375846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.956 07:15:38 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.956 07:15:38 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:54.956 07:15:38 json_config -- json_config/common.sh@26 -- # echo '' 00:05:54.956 00:05:54.956 07:15:38 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:54.956 07:15:38 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:54.956 07:15:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:54.956 07:15:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.956 07:15:38 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:54.956 07:15:38 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:54.956 07:15:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:54.956 07:15:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.956 07:15:38 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:54.956 07:15:38 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:54.956 07:15:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:55.530 07:15:39 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:05:55.530 07:15:39 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:55.530 07:15:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.530 07:15:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.530 07:15:39 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:55.530 07:15:39 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:55.530 07:15:39 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:55.530 07:15:39 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:55.530 07:15:39 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:55.530 07:15:39 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:55.530 07:15:39 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:55.530 07:15:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:55.530 07:15:39 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:55.530 07:15:39 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:55.530 07:15:39 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:55.791 07:15:39 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:55.791 07:15:39 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:55.791 07:15:39 json_config -- json_config/json_config.sh@54 -- # sort 00:05:55.791 07:15:39 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:55.791 07:15:39 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:05:55.791 07:15:39 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:55.791 07:15:39 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:55.791 07:15:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:55.791 07:15:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.791 07:15:39 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:55.791 07:15:39 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:55.791 07:15:39 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:55.791 07:15:39 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:55.791 07:15:39 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:55.791 07:15:39 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:55.791 07:15:39 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:55.791 07:15:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.791 07:15:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.791 07:15:39 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:55.791 07:15:39 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:55.791 07:15:39 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:55.791 07:15:39 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:55.791 07:15:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:55.791 MallocForNvmf0 00:05:55.791 07:15:39 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
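The `tr ' ' '\n' | sort | uniq -u` pipeline traced above is a compact set-difference check: every notification type that appears in both the expected list and the list reported by `notify_get_types` occurs twice after sorting and is suppressed by `uniq -u`, so an empty result means the two lists match. A standalone sketch (`type_diff` as a hypothetical function name):

```shell
# Emit entries that appear in exactly one of two space-separated lists.
# Matched entries occur twice after sorting and are dropped by `uniq -u`,
# so empty output means the lists contain the same members.
type_diff() {
    echo "$1 $2" | tr ' ' '\n' | sort | uniq -u
}
```

One caveat: this assumes neither list repeats an entry internally — a type listed twice on one side would also cancel itself out and go undetected.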
00:05:55.791 07:15:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:56.051 MallocForNvmf1 00:05:56.051 07:15:40 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:56.051 07:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:56.312 [2024-11-26 07:15:40.237764] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:56.312 07:15:40 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:56.312 07:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:56.573 07:15:40 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:56.573 07:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:56.573 07:15:40 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:56.573 07:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:56.834 07:15:40 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:56.834 07:15:40 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:56.834 [2024-11-26 07:15:40.960079] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:57.095 07:15:40 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:57.095 07:15:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:57.095 07:15:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.095 07:15:41 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:57.095 07:15:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:57.095 07:15:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.095 07:15:41 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:57.095 07:15:41 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:57.095 07:15:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:57.095 MallocBdevForConfigChangeCheck 00:05:57.356 07:15:41 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:57.356 07:15:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:57.356 07:15:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.356 07:15:41 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:57.356 07:15:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:57.618 07:15:41 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:05:57.618 INFO: shutting down applications... 00:05:57.618 07:15:41 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:57.618 07:15:41 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:57.618 07:15:41 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:57.618 07:15:41 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:57.880 Calling clear_iscsi_subsystem 00:05:57.880 Calling clear_nvmf_subsystem 00:05:57.880 Calling clear_nbd_subsystem 00:05:57.880 Calling clear_ublk_subsystem 00:05:57.880 Calling clear_vhost_blk_subsystem 00:05:57.880 Calling clear_vhost_scsi_subsystem 00:05:57.880 Calling clear_bdev_subsystem 00:05:58.140 07:15:42 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:58.140 07:15:42 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:58.140 07:15:42 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:58.140 07:15:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:58.140 07:15:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:58.140 07:15:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:58.401 07:15:42 json_config -- json_config/json_config.sh@352 -- # break 00:05:58.401 07:15:42 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:58.401 07:15:42 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:05:58.401 07:15:42 json_config -- json_config/common.sh@31 -- # local app=target 00:05:58.401 07:15:42 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:58.401 07:15:42 json_config -- json_config/common.sh@35 -- # [[ -n 1859660 ]] 00:05:58.401 07:15:42 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1859660 00:05:58.401 07:15:42 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:58.401 07:15:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.401 07:15:42 json_config -- json_config/common.sh@41 -- # kill -0 1859660 00:05:58.401 07:15:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:58.973 07:15:42 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:58.973 07:15:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.973 07:15:42 json_config -- json_config/common.sh@41 -- # kill -0 1859660 00:05:58.973 07:15:42 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:58.973 07:15:42 json_config -- json_config/common.sh@43 -- # break 00:05:58.973 07:15:42 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:58.973 07:15:42 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:58.973 SPDK target shutdown done 00:05:58.973 07:15:42 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:58.973 INFO: relaunching applications... 
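`json_config_test_shutdown_app` above sends SIGINT and then polls `kill -0` in a bounded loop — up to 30 iterations, 0.5 s apart — before reporting "SPDK target shutdown done". A minimal sketch of that pattern (hypothetical standalone `shutdown_app`; the real loop lives in test/json_config/common.sh):

```shell
# Send SIGINT, then poll with `kill -0` until the target exits or the
# ~15 s budget (30 x 0.5 s) runs out. Illustrative sketch only.
shutdown_app() {
    local pid=$1 i
    kill -SIGINT "$pid" 2>/dev/null
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 0   # process is gone
        sleep 0.5
    done
    return 1                                     # still alive after the budget
}
```

SIGINT (rather than SIGKILL) gives spdk_tgt a chance to tear down subsystems cleanly, which is exactly what the `clear_*_subsystem` calls above prepared for; the bounded poll keeps a hung target from stalling the pipeline forever.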
00:05:58.973 07:15:42 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:58.973 07:15:42 json_config -- json_config/common.sh@9 -- # local app=target 00:05:58.973 07:15:42 json_config -- json_config/common.sh@10 -- # shift 00:05:58.973 07:15:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:58.973 07:15:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:58.973 07:15:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:58.973 07:15:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.973 07:15:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.973 07:15:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1860768 00:05:58.973 07:15:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:58.973 Waiting for target to run... 00:05:58.973 07:15:42 json_config -- json_config/common.sh@25 -- # waitforlisten 1860768 /var/tmp/spdk_tgt.sock 00:05:58.973 07:15:42 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:58.973 07:15:42 json_config -- common/autotest_common.sh@835 -- # '[' -z 1860768 ']' 00:05:58.973 07:15:42 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:58.973 07:15:42 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.973 07:15:42 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:58.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:58.973 07:15:42 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:58.973 07:15:42 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:58.973 [2024-11-26 07:15:42.927937] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization...
00:05:58.973 [2024-11-26 07:15:42.927991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1860768 ]
00:05:59.233 [2024-11-26 07:15:43.235514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:59.233 [2024-11-26 07:15:43.265194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:59.805 [2024-11-26 07:15:43.787775] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:59.805 [2024-11-26 07:15:43.820158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:59.805 07:15:43 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:59.805 07:15:43 json_config -- common/autotest_common.sh@868 -- # return 0
00:05:59.805 07:15:43 json_config -- json_config/common.sh@26 -- # echo ''
00:05:59.805
00:05:59.805 07:15:43 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:05:59.805 07:15:43 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:05:59.805 INFO: Checking if target configuration is the same...
00:05:59.805 07:15:43 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:59.805 07:15:43 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:05:59.805 07:15:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:59.805 + '[' 2 -ne 2 ']'
00:05:59.805 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:59.805 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:59.805 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:59.805 +++ basename /dev/fd/62
00:05:59.805 ++ mktemp /tmp/62.XXX
00:05:59.805 + tmp_file_1=/tmp/62.rGY
00:05:59.805 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:59.805 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:59.805 + tmp_file_2=/tmp/spdk_tgt_config.json.vLz
00:05:59.805 + ret=0
00:05:59.805 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:06:00.066 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:06:00.328 + diff -u /tmp/62.rGY /tmp/spdk_tgt_config.json.vLz
00:06:00.328 + echo 'INFO: JSON config files are the same'
00:06:00.328 INFO: JSON config files are the same
00:06:00.328 + rm /tmp/62.rGY /tmp/spdk_tgt_config.json.vLz
00:06:00.328 + exit 0
00:06:00.328 07:15:44 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:06:00.328 07:15:44 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:06:00.328 INFO: changing configuration and checking if this can be detected...
00:06:00.328 07:15:44 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:06:00.328 07:15:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:06:00.328 07:15:44 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:06:00.328 07:15:44 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:06:00.328 07:15:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:06:00.328 + '[' 2 -ne 2 ']'
00:06:00.328 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:06:00.328 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:06:00.328 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:06:00.328 +++ basename /dev/fd/62
00:06:00.328 ++ mktemp /tmp/62.XXX
00:06:00.328 + tmp_file_1=/tmp/62.UkL
00:06:00.328 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:06:00.328 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:06:00.328 + tmp_file_2=/tmp/spdk_tgt_config.json.sXH
00:06:00.328 + ret=0
00:06:00.328 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:06:00.590 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:06:00.851 + diff -u /tmp/62.UkL /tmp/spdk_tgt_config.json.sXH
00:06:00.851 + ret=1
00:06:00.851 + echo '=== Start of file: /tmp/62.UkL ==='
00:06:00.851 + cat /tmp/62.UkL
00:06:00.851 + echo '=== End of file: /tmp/62.UkL ==='
00:06:00.851 + echo ''
00:06:00.851 + echo '=== Start of file: /tmp/spdk_tgt_config.json.sXH ==='
00:06:00.851 + cat /tmp/spdk_tgt_config.json.sXH
00:06:00.851 + echo '=== End of file: /tmp/spdk_tgt_config.json.sXH ==='
00:06:00.851 + echo ''
00:06:00.851 + rm /tmp/62.UkL /tmp/spdk_tgt_config.json.sXH
00:06:00.851 + exit 1
00:06:00.851 07:15:44 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:06:00.851 INFO: configuration change detected.
00:06:00.851 07:15:44 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:06:00.851 07:15:44 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:06:00.851 07:15:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:00.851 07:15:44 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:00.851 07:15:44 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:06:00.851 07:15:44 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:06:00.851 07:15:44 json_config -- json_config/json_config.sh@324 -- # [[ -n 1860768 ]]
00:06:00.851 07:15:44 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:06:00.851 07:15:44 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:06:00.851 07:15:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:00.851 07:15:44 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:00.851 07:15:44 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:06:00.851 07:15:44 json_config -- json_config/json_config.sh@200 -- # uname -s
00:06:00.851 07:15:44 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:06:00.851 07:15:44 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:06:00.851 07:15:44 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:06:00.851 07:15:44 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:06:00.851 07:15:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:00.851 07:15:44 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:00.851 07:15:44 json_config -- json_config/json_config.sh@330 -- # killprocess 1860768
00:06:00.851 07:15:44 json_config -- common/autotest_common.sh@954 -- # '[' -z 1860768 ']'
00:06:00.851 07:15:44 json_config -- common/autotest_common.sh@958 -- # kill -0 1860768
00:06:00.851 07:15:44 json_config -- common/autotest_common.sh@959 -- # uname
00:06:00.851 07:15:44 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:00.851 07:15:44 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1860768
00:06:00.851 07:15:44 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:00.851 07:15:44 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:00.851 07:15:44 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1860768'
00:06:00.851 killing process with pid 1860768
00:06:00.851 07:15:44 json_config -- common/autotest_common.sh@973 -- # kill 1860768
00:06:00.851 07:15:44 json_config -- common/autotest_common.sh@978 -- # wait 1860768
00:06:01.112 07:15:45 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:06:01.112 07:15:45 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:06:01.112 07:15:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:01.112 07:15:45 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:01.112 07:15:45 json_config -- json_config/json_config.sh@335 -- # return 0
00:06:01.112 07:15:45 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:06:01.112 INFO: Success
00:06:01.112
00:06:01.112 real 0m7.438s
00:06:01.112 user 0m9.009s
00:06:01.112 sys 0m1.931s
00:06:01.112 07:15:45 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:01.112 07:15:45 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:01.112 ************************************
00:06:01.112 END TEST json_config
00:06:01.112 ************************************
00:06:01.374 07:15:45 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:06:01.374 07:15:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:01.374 07:15:45 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:01.374 07:15:45 -- common/autotest_common.sh@10 -- # set +x
00:06:01.374 ************************************
00:06:01.374 START TEST json_config_extra_key
00:06:01.374 ************************************
00:06:01.374 07:15:45 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:06:01.374 07:15:45 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:01.374 07:15:45 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version
00:06:01.374 07:15:45 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:01.374 07:15:45 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:01.374 07:15:45 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:06:01.374 07:15:45 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:01.374 07:15:45 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:01.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.374 --rc genhtml_branch_coverage=1
00:06:01.374 --rc genhtml_function_coverage=1
00:06:01.374 --rc genhtml_legend=1
00:06:01.374 --rc geninfo_all_blocks=1
00:06:01.374 --rc geninfo_unexecuted_blocks=1
00:06:01.375
00:06:01.375 '
00:06:01.375 07:15:45 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:01.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.375 --rc genhtml_branch_coverage=1
00:06:01.375 --rc genhtml_function_coverage=1
00:06:01.375 --rc genhtml_legend=1
00:06:01.375 --rc geninfo_all_blocks=1
00:06:01.375 --rc geninfo_unexecuted_blocks=1
00:06:01.375
00:06:01.375 '
00:06:01.375 07:15:45 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:01.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.375 --rc genhtml_branch_coverage=1
00:06:01.375 --rc genhtml_function_coverage=1
00:06:01.375 --rc genhtml_legend=1
00:06:01.375 --rc geninfo_all_blocks=1
00:06:01.375 --rc geninfo_unexecuted_blocks=1
00:06:01.375
00:06:01.375 '
00:06:01.375 07:15:45 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:01.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.375 --rc genhtml_branch_coverage=1
00:06:01.375 --rc genhtml_function_coverage=1
00:06:01.375 --rc genhtml_legend=1
00:06:01.375 --rc geninfo_all_blocks=1
00:06:01.375 --rc geninfo_unexecuted_blocks=1
00:06:01.375
00:06:01.375 '
00:06:01.375 07:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:01.375 07:15:45 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:06:01.375 07:15:45 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:01.375 07:15:45 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:01.375 07:15:45 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:01.375 07:15:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:01.375 07:15:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:01.375 07:15:45 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:01.375 07:15:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:06:01.375 07:15:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:01.375 07:15:45 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:01.375 07:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:06:01.375 07:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:06:01.375 07:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:06:01.375 07:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:06:01.375 07:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:06:01.375 07:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:06:01.375 07:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:06:01.375 07:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:06:01.375 07:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:06:01.375 07:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:06:01.375 07:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:06:01.375 INFO: launching applications...
00:06:01.375 07:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:06:01.375 07:15:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:06:01.375 07:15:45 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:06:01.375 07:15:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:06:01.375 07:15:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:06:01.375 07:15:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:06:01.375 07:15:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:01.375 07:15:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:01.375 07:15:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1861556
00:06:01.375 07:15:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:06:01.375 Waiting for target to run...
00:06:01.375 07:15:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1861556 /var/tmp/spdk_tgt.sock
00:06:01.375 07:15:45 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1861556 ']'
00:06:01.375 07:15:45 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:06:01.375 07:15:45 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:06:01.375 07:15:45 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:01.375 07:15:45 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:06:01.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:06:01.375 07:15:45 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:01.375 07:15:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:06:01.637 [2024-11-26 07:15:45.562400] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization...
00:06:01.637 [2024-11-26 07:15:45.562482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1861556 ]
00:06:01.899 [2024-11-26 07:15:45.840816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:01.899 [2024-11-26 07:15:45.871161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:02.472 07:15:46 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:02.472 07:15:46 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:06:02.472 07:15:46 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:06:02.472
00:06:02.472 07:15:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:06:02.472 INFO: shutting down applications...
00:06:02.472 07:15:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:06:02.472 07:15:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:06:02.472 07:15:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:06:02.472 07:15:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1861556 ]]
00:06:02.472 07:15:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1861556
00:06:02.472 07:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:06:02.472 07:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:02.472 07:15:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1861556
00:06:02.472 07:15:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:06:02.734 07:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:06:02.734 07:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:02.734 07:15:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1861556
00:06:02.734 07:15:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:06:02.734 07:15:46 json_config_extra_key -- json_config/common.sh@43 -- # break
00:06:02.734 07:15:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:06:02.734 07:15:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:06:02.734 SPDK target shutdown done
00:06:02.734 07:15:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:06:02.734 Success
00:06:02.734
00:06:02.734 real 0m1.564s
00:06:02.734 user 0m1.220s
00:06:02.734 sys 0m0.382s
00:06:02.734 07:15:46 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:02.734 07:15:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:06:02.734 ************************************
00:06:02.734 END TEST json_config_extra_key
00:06:02.734 ************************************
00:06:02.995 07:15:46 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:02.995 07:15:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:02.995 07:15:46 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:02.995 07:15:46 -- common/autotest_common.sh@10 -- # set +x
00:06:02.995 ************************************
00:06:02.995 START TEST alias_rpc
00:06:02.995 ************************************
00:06:02.995 07:15:46 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:02.995 * Looking for test storage...
00:06:02.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:06:02.995 07:15:47 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:02.995 07:15:47 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:06:02.995 07:15:47 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:02.995 07:15:47 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@345 -- # : 1
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:02.995 07:15:47 alias_rpc -- scripts/common.sh@368 -- # return 0
00:06:02.995 07:15:47 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:02.995 07:15:47 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:02.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:02.995 --rc genhtml_branch_coverage=1
00:06:02.995 --rc genhtml_function_coverage=1
00:06:02.995 --rc genhtml_legend=1
00:06:02.995 --rc geninfo_all_blocks=1
00:06:02.995 --rc geninfo_unexecuted_blocks=1
00:06:02.995
00:06:02.995 '
00:06:03.255 07:15:47 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:03.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.256 --rc genhtml_branch_coverage=1
00:06:03.256 --rc genhtml_function_coverage=1
00:06:03.256 --rc genhtml_legend=1
00:06:03.256 --rc geninfo_all_blocks=1
00:06:03.256 --rc geninfo_unexecuted_blocks=1
00:06:03.256
00:06:03.256 '
00:06:03.256 07:15:47 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:03.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.256 --rc genhtml_branch_coverage=1
00:06:03.256 --rc genhtml_function_coverage=1
00:06:03.256 --rc genhtml_legend=1
00:06:03.256 --rc geninfo_all_blocks=1
00:06:03.256 --rc geninfo_unexecuted_blocks=1
00:06:03.256
00:06:03.256 '
00:06:03.256 07:15:47 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:03.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.256 --rc genhtml_branch_coverage=1
00:06:03.256 --rc genhtml_function_coverage=1
00:06:03.256 --rc genhtml_legend=1
00:06:03.256 --rc geninfo_all_blocks=1
00:06:03.256 --rc geninfo_unexecuted_blocks=1
00:06:03.256
00:06:03.256 '
00:06:03.256 07:15:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:03.256 07:15:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1861944
00:06:03.256 07:15:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1861944
00:06:03.256 07:15:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:03.256 07:15:47 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1861944 ']'
00:06:03.256 07:15:47 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:03.256 07:15:47 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:03.256 07:15:47 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:03.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:03.256 07:15:47 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:03.256 07:15:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:03.256 [2024-11-26 07:15:47.186063] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization...
00:06:03.256 [2024-11-26 07:15:47.186119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1861944 ] 00:06:03.256 [2024-11-26 07:15:47.265545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.256 [2024-11-26 07:15:47.301672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.198 07:15:47 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.198 07:15:47 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:04.198 07:15:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:04.198 07:15:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1861944 00:06:04.198 07:15:48 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1861944 ']' 00:06:04.198 07:15:48 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1861944 00:06:04.198 07:15:48 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:04.198 07:15:48 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.198 07:15:48 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1861944 00:06:04.198 07:15:48 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.198 07:15:48 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.198 07:15:48 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1861944' 00:06:04.198 killing process with pid 1861944 00:06:04.198 07:15:48 alias_rpc -- common/autotest_common.sh@973 -- # kill 1861944 00:06:04.198 07:15:48 alias_rpc -- common/autotest_common.sh@978 -- # wait 1861944 00:06:04.459 00:06:04.459 real 0m1.539s 00:06:04.459 user 0m1.695s 00:06:04.459 sys 0m0.425s 00:06:04.459 07:15:48 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.459 07:15:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.459 ************************************ 00:06:04.459 END TEST alias_rpc 00:06:04.459 ************************************ 00:06:04.459 07:15:48 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:04.459 07:15:48 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:04.459 07:15:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.459 07:15:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.459 07:15:48 -- common/autotest_common.sh@10 -- # set +x 00:06:04.459 ************************************ 00:06:04.459 START TEST spdkcli_tcp 00:06:04.459 ************************************ 00:06:04.459 07:15:48 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:04.720 * Looking for test storage... 
00:06:04.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:04.720 07:15:48 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.720 07:15:48 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.720 07:15:48 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:04.720 07:15:48 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.720 07:15:48 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:04.720 07:15:48 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.720 07:15:48 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:04.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.720 --rc genhtml_branch_coverage=1 00:06:04.720 --rc genhtml_function_coverage=1 00:06:04.720 --rc genhtml_legend=1 00:06:04.720 --rc geninfo_all_blocks=1 00:06:04.720 --rc geninfo_unexecuted_blocks=1 00:06:04.720 00:06:04.720 ' 00:06:04.721 07:15:48 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:04.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.721 --rc genhtml_branch_coverage=1 00:06:04.721 --rc genhtml_function_coverage=1 00:06:04.721 --rc genhtml_legend=1 00:06:04.721 --rc geninfo_all_blocks=1 00:06:04.721 --rc geninfo_unexecuted_blocks=1 00:06:04.721 00:06:04.721 ' 00:06:04.721 07:15:48 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:04.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.721 --rc genhtml_branch_coverage=1 00:06:04.721 --rc genhtml_function_coverage=1 00:06:04.721 --rc genhtml_legend=1 00:06:04.721 --rc geninfo_all_blocks=1 00:06:04.721 --rc geninfo_unexecuted_blocks=1 00:06:04.721 00:06:04.721 ' 00:06:04.721 07:15:48 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:04.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.721 --rc genhtml_branch_coverage=1 00:06:04.721 --rc genhtml_function_coverage=1 00:06:04.721 --rc genhtml_legend=1 00:06:04.721 --rc geninfo_all_blocks=1 00:06:04.721 --rc geninfo_unexecuted_blocks=1 00:06:04.721 00:06:04.721 ' 00:06:04.721 07:15:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:04.721 07:15:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:04.721 07:15:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:04.721 07:15:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:04.721 07:15:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:04.721 07:15:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:04.721 07:15:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:04.721 07:15:48 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.721 07:15:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.721 07:15:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1862346 00:06:04.721 07:15:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1862346 00:06:04.721 07:15:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:04.721 07:15:48 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1862346 ']' 00:06:04.721 07:15:48 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.721 07:15:48 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.721 07:15:48 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.721 07:15:48 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.721 07:15:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.721 [2024-11-26 07:15:48.788495] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:06:04.721 [2024-11-26 07:15:48.788552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1862346 ] 00:06:04.982 [2024-11-26 07:15:48.867163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.982 [2024-11-26 07:15:48.904608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.982 [2024-11-26 07:15:48.904609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.551 07:15:49 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.551 07:15:49 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:05.551 07:15:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1862360 00:06:05.551 07:15:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:05.551 07:15:49 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:05.812 [ 00:06:05.812 "bdev_malloc_delete", 00:06:05.812 "bdev_malloc_create", 00:06:05.812 "bdev_null_resize", 00:06:05.812 "bdev_null_delete", 00:06:05.812 "bdev_null_create", 00:06:05.812 "bdev_nvme_cuse_unregister", 00:06:05.812 "bdev_nvme_cuse_register", 00:06:05.812 "bdev_opal_new_user", 00:06:05.812 "bdev_opal_set_lock_state", 00:06:05.812 "bdev_opal_delete", 00:06:05.812 "bdev_opal_get_info", 00:06:05.812 "bdev_opal_create", 00:06:05.812 "bdev_nvme_opal_revert", 00:06:05.812 "bdev_nvme_opal_init", 00:06:05.812 "bdev_nvme_send_cmd", 00:06:05.812 "bdev_nvme_set_keys", 00:06:05.812 "bdev_nvme_get_path_iostat", 00:06:05.812 "bdev_nvme_get_mdns_discovery_info", 00:06:05.812 "bdev_nvme_stop_mdns_discovery", 00:06:05.812 "bdev_nvme_start_mdns_discovery", 00:06:05.812 "bdev_nvme_set_multipath_policy", 00:06:05.812 "bdev_nvme_set_preferred_path", 00:06:05.812 "bdev_nvme_get_io_paths", 00:06:05.812 "bdev_nvme_remove_error_injection", 00:06:05.812 "bdev_nvme_add_error_injection", 00:06:05.812 "bdev_nvme_get_discovery_info", 00:06:05.812 "bdev_nvme_stop_discovery", 00:06:05.812 "bdev_nvme_start_discovery", 00:06:05.812 "bdev_nvme_get_controller_health_info", 00:06:05.812 "bdev_nvme_disable_controller", 00:06:05.812 "bdev_nvme_enable_controller", 00:06:05.812 "bdev_nvme_reset_controller", 00:06:05.812 "bdev_nvme_get_transport_statistics", 00:06:05.812 "bdev_nvme_apply_firmware", 00:06:05.812 "bdev_nvme_detach_controller", 00:06:05.812 "bdev_nvme_get_controllers", 00:06:05.812 "bdev_nvme_attach_controller", 00:06:05.812 "bdev_nvme_set_hotplug", 00:06:05.812 "bdev_nvme_set_options", 00:06:05.812 "bdev_passthru_delete", 00:06:05.812 "bdev_passthru_create", 00:06:05.812 "bdev_lvol_set_parent_bdev", 00:06:05.812 "bdev_lvol_set_parent", 00:06:05.812 "bdev_lvol_check_shallow_copy", 00:06:05.812 "bdev_lvol_start_shallow_copy", 00:06:05.812 "bdev_lvol_grow_lvstore", 00:06:05.812 
"bdev_lvol_get_lvols", 00:06:05.812 "bdev_lvol_get_lvstores", 00:06:05.812 "bdev_lvol_delete", 00:06:05.812 "bdev_lvol_set_read_only", 00:06:05.812 "bdev_lvol_resize", 00:06:05.812 "bdev_lvol_decouple_parent", 00:06:05.812 "bdev_lvol_inflate", 00:06:05.812 "bdev_lvol_rename", 00:06:05.812 "bdev_lvol_clone_bdev", 00:06:05.812 "bdev_lvol_clone", 00:06:05.812 "bdev_lvol_snapshot", 00:06:05.812 "bdev_lvol_create", 00:06:05.812 "bdev_lvol_delete_lvstore", 00:06:05.812 "bdev_lvol_rename_lvstore", 00:06:05.812 "bdev_lvol_create_lvstore", 00:06:05.812 "bdev_raid_set_options", 00:06:05.812 "bdev_raid_remove_base_bdev", 00:06:05.812 "bdev_raid_add_base_bdev", 00:06:05.812 "bdev_raid_delete", 00:06:05.812 "bdev_raid_create", 00:06:05.812 "bdev_raid_get_bdevs", 00:06:05.812 "bdev_error_inject_error", 00:06:05.812 "bdev_error_delete", 00:06:05.812 "bdev_error_create", 00:06:05.812 "bdev_split_delete", 00:06:05.812 "bdev_split_create", 00:06:05.812 "bdev_delay_delete", 00:06:05.812 "bdev_delay_create", 00:06:05.812 "bdev_delay_update_latency", 00:06:05.812 "bdev_zone_block_delete", 00:06:05.812 "bdev_zone_block_create", 00:06:05.812 "blobfs_create", 00:06:05.812 "blobfs_detect", 00:06:05.812 "blobfs_set_cache_size", 00:06:05.812 "bdev_aio_delete", 00:06:05.812 "bdev_aio_rescan", 00:06:05.812 "bdev_aio_create", 00:06:05.812 "bdev_ftl_set_property", 00:06:05.812 "bdev_ftl_get_properties", 00:06:05.812 "bdev_ftl_get_stats", 00:06:05.812 "bdev_ftl_unmap", 00:06:05.812 "bdev_ftl_unload", 00:06:05.812 "bdev_ftl_delete", 00:06:05.812 "bdev_ftl_load", 00:06:05.812 "bdev_ftl_create", 00:06:05.812 "bdev_virtio_attach_controller", 00:06:05.812 "bdev_virtio_scsi_get_devices", 00:06:05.812 "bdev_virtio_detach_controller", 00:06:05.812 "bdev_virtio_blk_set_hotplug", 00:06:05.812 "bdev_iscsi_delete", 00:06:05.812 "bdev_iscsi_create", 00:06:05.812 "bdev_iscsi_set_options", 00:06:05.812 "accel_error_inject_error", 00:06:05.812 "ioat_scan_accel_module", 00:06:05.812 "dsa_scan_accel_module", 
00:06:05.812 "iaa_scan_accel_module", 00:06:05.812 "vfu_virtio_create_fs_endpoint", 00:06:05.812 "vfu_virtio_create_scsi_endpoint", 00:06:05.812 "vfu_virtio_scsi_remove_target", 00:06:05.812 "vfu_virtio_scsi_add_target", 00:06:05.812 "vfu_virtio_create_blk_endpoint", 00:06:05.812 "vfu_virtio_delete_endpoint", 00:06:05.812 "keyring_file_remove_key", 00:06:05.812 "keyring_file_add_key", 00:06:05.812 "keyring_linux_set_options", 00:06:05.812 "fsdev_aio_delete", 00:06:05.812 "fsdev_aio_create", 00:06:05.812 "iscsi_get_histogram", 00:06:05.812 "iscsi_enable_histogram", 00:06:05.812 "iscsi_set_options", 00:06:05.812 "iscsi_get_auth_groups", 00:06:05.812 "iscsi_auth_group_remove_secret", 00:06:05.812 "iscsi_auth_group_add_secret", 00:06:05.812 "iscsi_delete_auth_group", 00:06:05.812 "iscsi_create_auth_group", 00:06:05.812 "iscsi_set_discovery_auth", 00:06:05.812 "iscsi_get_options", 00:06:05.812 "iscsi_target_node_request_logout", 00:06:05.812 "iscsi_target_node_set_redirect", 00:06:05.812 "iscsi_target_node_set_auth", 00:06:05.813 "iscsi_target_node_add_lun", 00:06:05.813 "iscsi_get_stats", 00:06:05.813 "iscsi_get_connections", 00:06:05.813 "iscsi_portal_group_set_auth", 00:06:05.813 "iscsi_start_portal_group", 00:06:05.813 "iscsi_delete_portal_group", 00:06:05.813 "iscsi_create_portal_group", 00:06:05.813 "iscsi_get_portal_groups", 00:06:05.813 "iscsi_delete_target_node", 00:06:05.813 "iscsi_target_node_remove_pg_ig_maps", 00:06:05.813 "iscsi_target_node_add_pg_ig_maps", 00:06:05.813 "iscsi_create_target_node", 00:06:05.813 "iscsi_get_target_nodes", 00:06:05.813 "iscsi_delete_initiator_group", 00:06:05.813 "iscsi_initiator_group_remove_initiators", 00:06:05.813 "iscsi_initiator_group_add_initiators", 00:06:05.813 "iscsi_create_initiator_group", 00:06:05.813 "iscsi_get_initiator_groups", 00:06:05.813 "nvmf_set_crdt", 00:06:05.813 "nvmf_set_config", 00:06:05.813 "nvmf_set_max_subsystems", 00:06:05.813 "nvmf_stop_mdns_prr", 00:06:05.813 "nvmf_publish_mdns_prr", 
00:06:05.813 "nvmf_subsystem_get_listeners", 00:06:05.813 "nvmf_subsystem_get_qpairs", 00:06:05.813 "nvmf_subsystem_get_controllers", 00:06:05.813 "nvmf_get_stats", 00:06:05.813 "nvmf_get_transports", 00:06:05.813 "nvmf_create_transport", 00:06:05.813 "nvmf_get_targets", 00:06:05.813 "nvmf_delete_target", 00:06:05.813 "nvmf_create_target", 00:06:05.813 "nvmf_subsystem_allow_any_host", 00:06:05.813 "nvmf_subsystem_set_keys", 00:06:05.813 "nvmf_subsystem_remove_host", 00:06:05.813 "nvmf_subsystem_add_host", 00:06:05.813 "nvmf_ns_remove_host", 00:06:05.813 "nvmf_ns_add_host", 00:06:05.813 "nvmf_subsystem_remove_ns", 00:06:05.813 "nvmf_subsystem_set_ns_ana_group", 00:06:05.813 "nvmf_subsystem_add_ns", 00:06:05.813 "nvmf_subsystem_listener_set_ana_state", 00:06:05.813 "nvmf_discovery_get_referrals", 00:06:05.813 "nvmf_discovery_remove_referral", 00:06:05.813 "nvmf_discovery_add_referral", 00:06:05.813 "nvmf_subsystem_remove_listener", 00:06:05.813 "nvmf_subsystem_add_listener", 00:06:05.813 "nvmf_delete_subsystem", 00:06:05.813 "nvmf_create_subsystem", 00:06:05.813 "nvmf_get_subsystems", 00:06:05.813 "env_dpdk_get_mem_stats", 00:06:05.813 "nbd_get_disks", 00:06:05.813 "nbd_stop_disk", 00:06:05.813 "nbd_start_disk", 00:06:05.813 "ublk_recover_disk", 00:06:05.813 "ublk_get_disks", 00:06:05.813 "ublk_stop_disk", 00:06:05.813 "ublk_start_disk", 00:06:05.813 "ublk_destroy_target", 00:06:05.813 "ublk_create_target", 00:06:05.813 "virtio_blk_create_transport", 00:06:05.813 "virtio_blk_get_transports", 00:06:05.813 "vhost_controller_set_coalescing", 00:06:05.813 "vhost_get_controllers", 00:06:05.813 "vhost_delete_controller", 00:06:05.813 "vhost_create_blk_controller", 00:06:05.813 "vhost_scsi_controller_remove_target", 00:06:05.813 "vhost_scsi_controller_add_target", 00:06:05.813 "vhost_start_scsi_controller", 00:06:05.813 "vhost_create_scsi_controller", 00:06:05.813 "thread_set_cpumask", 00:06:05.813 "scheduler_set_options", 00:06:05.813 "framework_get_governor", 00:06:05.813 
"framework_get_scheduler", 00:06:05.813 "framework_set_scheduler", 00:06:05.813 "framework_get_reactors", 00:06:05.813 "thread_get_io_channels", 00:06:05.813 "thread_get_pollers", 00:06:05.813 "thread_get_stats", 00:06:05.813 "framework_monitor_context_switch", 00:06:05.813 "spdk_kill_instance", 00:06:05.813 "log_enable_timestamps", 00:06:05.813 "log_get_flags", 00:06:05.813 "log_clear_flag", 00:06:05.813 "log_set_flag", 00:06:05.813 "log_get_level", 00:06:05.813 "log_set_level", 00:06:05.813 "log_get_print_level", 00:06:05.813 "log_set_print_level", 00:06:05.813 "framework_enable_cpumask_locks", 00:06:05.813 "framework_disable_cpumask_locks", 00:06:05.813 "framework_wait_init", 00:06:05.813 "framework_start_init", 00:06:05.813 "scsi_get_devices", 00:06:05.813 "bdev_get_histogram", 00:06:05.813 "bdev_enable_histogram", 00:06:05.813 "bdev_set_qos_limit", 00:06:05.813 "bdev_set_qd_sampling_period", 00:06:05.813 "bdev_get_bdevs", 00:06:05.813 "bdev_reset_iostat", 00:06:05.813 "bdev_get_iostat", 00:06:05.813 "bdev_examine", 00:06:05.813 "bdev_wait_for_examine", 00:06:05.813 "bdev_set_options", 00:06:05.813 "accel_get_stats", 00:06:05.813 "accel_set_options", 00:06:05.813 "accel_set_driver", 00:06:05.813 "accel_crypto_key_destroy", 00:06:05.813 "accel_crypto_keys_get", 00:06:05.813 "accel_crypto_key_create", 00:06:05.813 "accel_assign_opc", 00:06:05.813 "accel_get_module_info", 00:06:05.813 "accel_get_opc_assignments", 00:06:05.813 "vmd_rescan", 00:06:05.813 "vmd_remove_device", 00:06:05.813 "vmd_enable", 00:06:05.813 "sock_get_default_impl", 00:06:05.813 "sock_set_default_impl", 00:06:05.813 "sock_impl_set_options", 00:06:05.813 "sock_impl_get_options", 00:06:05.813 "iobuf_get_stats", 00:06:05.813 "iobuf_set_options", 00:06:05.813 "keyring_get_keys", 00:06:05.813 "vfu_tgt_set_base_path", 00:06:05.813 "framework_get_pci_devices", 00:06:05.813 "framework_get_config", 00:06:05.813 "framework_get_subsystems", 00:06:05.813 "fsdev_set_opts", 00:06:05.813 "fsdev_get_opts", 
00:06:05.813 "trace_get_info", 00:06:05.813 "trace_get_tpoint_group_mask", 00:06:05.813 "trace_disable_tpoint_group", 00:06:05.813 "trace_enable_tpoint_group", 00:06:05.813 "trace_clear_tpoint_mask", 00:06:05.813 "trace_set_tpoint_mask", 00:06:05.813 "notify_get_notifications", 00:06:05.813 "notify_get_types", 00:06:05.813 "spdk_get_version", 00:06:05.813 "rpc_get_methods" 00:06:05.813 ] 00:06:05.813 07:15:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:05.813 07:15:49 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:05.813 07:15:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.813 07:15:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:05.813 07:15:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1862346 00:06:05.813 07:15:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1862346 ']' 00:06:05.813 07:15:49 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1862346 00:06:05.813 07:15:49 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:05.813 07:15:49 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.813 07:15:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1862346 00:06:05.813 07:15:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.813 07:15:49 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.813 07:15:49 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1862346' 00:06:05.813 killing process with pid 1862346 00:06:05.813 07:15:49 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1862346 00:06:05.813 07:15:49 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1862346 00:06:06.074 00:06:06.074 real 0m1.542s 00:06:06.074 user 0m2.851s 00:06:06.074 sys 0m0.424s 00:06:06.074 07:15:50 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.074 07:15:50 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.074 ************************************ 00:06:06.074 END TEST spdkcli_tcp 00:06:06.074 ************************************ 00:06:06.075 07:15:50 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:06.075 07:15:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.075 07:15:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.075 07:15:50 -- common/autotest_common.sh@10 -- # set +x 00:06:06.075 ************************************ 00:06:06.075 START TEST dpdk_mem_utility 00:06:06.075 ************************************ 00:06:06.075 07:15:50 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:06.337 * Looking for test storage... 00:06:06.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:06.337 07:15:50 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.337 07:15:50 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.337 07:15:50 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.337 07:15:50 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.337 07:15:50 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.337 07:15:50 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.337 07:15:50 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.337 07:15:50 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.337 07:15:50 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.338 07:15:50 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:06.338 07:15:50 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.338 07:15:50 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:06:06.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.338 --rc genhtml_branch_coverage=1 00:06:06.338 --rc genhtml_function_coverage=1 00:06:06.338 --rc genhtml_legend=1 00:06:06.338 --rc geninfo_all_blocks=1 00:06:06.338 --rc geninfo_unexecuted_blocks=1 00:06:06.338 00:06:06.338 ' 00:06:06.338 07:15:50 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.338 --rc genhtml_branch_coverage=1 00:06:06.338 --rc genhtml_function_coverage=1 00:06:06.338 --rc genhtml_legend=1 00:06:06.338 --rc geninfo_all_blocks=1 00:06:06.338 --rc geninfo_unexecuted_blocks=1 00:06:06.338 00:06:06.338 ' 00:06:06.338 07:15:50 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.338 --rc genhtml_branch_coverage=1 00:06:06.338 --rc genhtml_function_coverage=1 00:06:06.338 --rc genhtml_legend=1 00:06:06.338 --rc geninfo_all_blocks=1 00:06:06.338 --rc geninfo_unexecuted_blocks=1 00:06:06.338 00:06:06.338 ' 00:06:06.338 07:15:50 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.338 --rc genhtml_branch_coverage=1 00:06:06.338 --rc genhtml_function_coverage=1 00:06:06.338 --rc genhtml_legend=1 00:06:06.338 --rc geninfo_all_blocks=1 00:06:06.338 --rc geninfo_unexecuted_blocks=1 00:06:06.338 00:06:06.338 ' 00:06:06.338 07:15:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:06.338 07:15:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1862761 00:06:06.338 07:15:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1862761 00:06:06.338 07:15:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # 
'[' -z 1862761 ']' 00:06:06.338 07:15:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:06.338 07:15:50 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.338 07:15:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.338 07:15:50 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.338 07:15:50 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.338 07:15:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:06.338 [2024-11-26 07:15:50.410119] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:06:06.338 [2024-11-26 07:15:50.410184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1862761 ] 00:06:06.600 [2024-11-26 07:15:50.492936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.600 [2024-11-26 07:15:50.534532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.172 07:15:51 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.172 07:15:51 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:07.172 07:15:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:07.172 07:15:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:07.172 07:15:51 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 
00:06:07.172 07:15:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:07.172 { 00:06:07.172 "filename": "/tmp/spdk_mem_dump.txt" 00:06:07.172 } 00:06:07.172 07:15:51 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.172 07:15:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:07.172 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:07.172 1 heaps totaling size 810.000000 MiB 00:06:07.172 size: 810.000000 MiB heap id: 0 00:06:07.172 end heaps---------- 00:06:07.172 9 mempools totaling size 595.772034 MiB 00:06:07.172 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:07.172 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:07.172 size: 92.545471 MiB name: bdev_io_1862761 00:06:07.172 size: 50.003479 MiB name: msgpool_1862761 00:06:07.172 size: 36.509338 MiB name: fsdev_io_1862761 00:06:07.172 size: 21.763794 MiB name: PDU_Pool 00:06:07.172 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:07.172 size: 4.133484 MiB name: evtpool_1862761 00:06:07.172 size: 0.026123 MiB name: Session_Pool 00:06:07.172 end mempools------- 00:06:07.172 6 memzones totaling size 4.142822 MiB 00:06:07.172 size: 1.000366 MiB name: RG_ring_0_1862761 00:06:07.172 size: 1.000366 MiB name: RG_ring_1_1862761 00:06:07.172 size: 1.000366 MiB name: RG_ring_4_1862761 00:06:07.172 size: 1.000366 MiB name: RG_ring_5_1862761 00:06:07.172 size: 0.125366 MiB name: RG_ring_2_1862761 00:06:07.173 size: 0.015991 MiB name: RG_ring_3_1862761 00:06:07.173 end memzones------- 00:06:07.173 07:15:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:07.435 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:07.435 list of free elements. 
size: 10.862488 MiB 00:06:07.435 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:07.435 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:07.435 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:07.435 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:07.435 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:07.435 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:07.435 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:07.435 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:07.435 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:07.435 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:07.435 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:07.435 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:07.435 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:07.435 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:07.435 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:07.435 list of standard malloc elements. 
size: 199.218628 MiB 00:06:07.435 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:07.435 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:07.435 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:07.435 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:07.435 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:07.435 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:07.435 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:07.435 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:07.435 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:07.435 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:07.435 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:07.435 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:07.435 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:07.435 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:07.435 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:07.435 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:07.435 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:07.435 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:07.435 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:07.435 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:07.435 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:07.435 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:07.435 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:07.435 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:07.435 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:07.435 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:07.435 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:07.435 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:07.435 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:07.435 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:07.435 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:07.435 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:07.435 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:07.435 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:07.435 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:07.435 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:07.435 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:07.435 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:07.435 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:07.435 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:07.435 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:07.435 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:07.435 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:07.435 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:07.435 list of memzone associated elements. 
size: 599.918884 MiB 00:06:07.435 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:07.435 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:07.435 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:07.435 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:07.435 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:07.435 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1862761_0 00:06:07.435 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:07.436 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1862761_0 00:06:07.436 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:07.436 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1862761_0 00:06:07.436 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:07.436 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:07.436 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:07.436 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:07.436 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:07.436 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1862761_0 00:06:07.436 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:07.436 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1862761 00:06:07.436 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:07.436 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1862761 00:06:07.436 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:07.436 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:07.436 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:07.436 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:07.436 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:07.436 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:07.436 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:07.436 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:07.436 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:07.436 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1862761 00:06:07.436 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:07.436 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1862761 00:06:07.436 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:07.436 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1862761 00:06:07.436 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:07.436 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1862761 00:06:07.436 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:07.436 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1862761 00:06:07.436 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:07.436 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1862761 00:06:07.436 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:07.436 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:07.436 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:07.436 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:07.436 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:07.436 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:07.436 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:07.436 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1862761 00:06:07.436 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:07.436 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1862761 00:06:07.436 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:06:07.436 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:07.436 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:07.436 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:07.436 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:07.436 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1862761 00:06:07.436 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:07.436 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:07.436 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:07.436 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1862761 00:06:07.436 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:07.436 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1862761 00:06:07.436 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:07.436 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1862761 00:06:07.436 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:07.436 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:07.436 07:15:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:07.436 07:15:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1862761 00:06:07.436 07:15:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1862761 ']' 00:06:07.436 07:15:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1862761 00:06:07.436 07:15:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:07.436 07:15:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.436 07:15:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1862761 00:06:07.436 07:15:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.436 07:15:51 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.436 07:15:51 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1862761' 00:06:07.436 killing process with pid 1862761 00:06:07.436 07:15:51 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1862761 00:06:07.436 07:15:51 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1862761 00:06:07.698 00:06:07.698 real 0m1.443s 00:06:07.698 user 0m1.526s 00:06:07.699 sys 0m0.426s 00:06:07.699 07:15:51 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.699 07:15:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:07.699 ************************************ 00:06:07.699 END TEST dpdk_mem_utility 00:06:07.699 ************************************ 00:06:07.699 07:15:51 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:07.699 07:15:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.699 07:15:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.699 07:15:51 -- common/autotest_common.sh@10 -- # set +x 00:06:07.699 ************************************ 00:06:07.699 START TEST event 00:06:07.699 ************************************ 00:06:07.699 07:15:51 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:07.699 * Looking for test storage... 
00:06:07.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:07.699 07:15:51 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.699 07:15:51 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.699 07:15:51 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.960 07:15:51 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.960 07:15:51 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.960 07:15:51 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.960 07:15:51 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.960 07:15:51 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.960 07:15:51 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.960 07:15:51 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.960 07:15:51 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.960 07:15:51 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.960 07:15:51 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.960 07:15:51 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.960 07:15:51 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.960 07:15:51 event -- scripts/common.sh@344 -- # case "$op" in 00:06:07.960 07:15:51 event -- scripts/common.sh@345 -- # : 1 00:06:07.960 07:15:51 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.960 07:15:51 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.960 07:15:51 event -- scripts/common.sh@365 -- # decimal 1 00:06:07.960 07:15:51 event -- scripts/common.sh@353 -- # local d=1 00:06:07.960 07:15:51 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.960 07:15:51 event -- scripts/common.sh@355 -- # echo 1 00:06:07.960 07:15:51 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.960 07:15:51 event -- scripts/common.sh@366 -- # decimal 2 00:06:07.960 07:15:51 event -- scripts/common.sh@353 -- # local d=2 00:06:07.960 07:15:51 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.960 07:15:51 event -- scripts/common.sh@355 -- # echo 2 00:06:07.960 07:15:51 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.960 07:15:51 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.960 07:15:51 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.960 07:15:51 event -- scripts/common.sh@368 -- # return 0 00:06:07.960 07:15:51 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.960 07:15:51 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.960 --rc genhtml_branch_coverage=1 00:06:07.960 --rc genhtml_function_coverage=1 00:06:07.960 --rc genhtml_legend=1 00:06:07.960 --rc geninfo_all_blocks=1 00:06:07.960 --rc geninfo_unexecuted_blocks=1 00:06:07.960 00:06:07.960 ' 00:06:07.960 07:15:51 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.960 --rc genhtml_branch_coverage=1 00:06:07.960 --rc genhtml_function_coverage=1 00:06:07.960 --rc genhtml_legend=1 00:06:07.960 --rc geninfo_all_blocks=1 00:06:07.960 --rc geninfo_unexecuted_blocks=1 00:06:07.960 00:06:07.960 ' 00:06:07.960 07:15:51 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.960 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:07.960 --rc genhtml_branch_coverage=1 00:06:07.960 --rc genhtml_function_coverage=1 00:06:07.961 --rc genhtml_legend=1 00:06:07.961 --rc geninfo_all_blocks=1 00:06:07.961 --rc geninfo_unexecuted_blocks=1 00:06:07.961 00:06:07.961 ' 00:06:07.961 07:15:51 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.961 --rc genhtml_branch_coverage=1 00:06:07.961 --rc genhtml_function_coverage=1 00:06:07.961 --rc genhtml_legend=1 00:06:07.961 --rc geninfo_all_blocks=1 00:06:07.961 --rc geninfo_unexecuted_blocks=1 00:06:07.961 00:06:07.961 ' 00:06:07.961 07:15:51 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:07.961 07:15:51 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:07.961 07:15:51 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:07.961 07:15:51 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:07.961 07:15:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.961 07:15:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.961 ************************************ 00:06:07.961 START TEST event_perf 00:06:07.961 ************************************ 00:06:07.961 07:15:51 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:07.961 Running I/O for 1 seconds...[2024-11-26 07:15:51.943935] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:06:07.961 [2024-11-26 07:15:51.944043] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1863079 ] 00:06:07.961 [2024-11-26 07:15:52.031883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:07.961 [2024-11-26 07:15:52.077937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.961 [2024-11-26 07:15:52.078053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.961 [2024-11-26 07:15:52.078212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.961 Running I/O for 1 seconds...[2024-11-26 07:15:52.078213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.345 00:06:09.345 lcore 0: 183194 00:06:09.345 lcore 1: 183195 00:06:09.345 lcore 2: 183193 00:06:09.345 lcore 3: 183196 00:06:09.345 done. 
00:06:09.345 00:06:09.345 real 0m1.191s 00:06:09.345 user 0m4.110s 00:06:09.345 sys 0m0.079s 00:06:09.345 07:15:53 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.345 07:15:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.345 ************************************ 00:06:09.345 END TEST event_perf 00:06:09.345 ************************************ 00:06:09.345 07:15:53 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:09.345 07:15:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:09.345 07:15:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.345 07:15:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.345 ************************************ 00:06:09.345 START TEST event_reactor 00:06:09.345 ************************************ 00:06:09.345 07:15:53 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:09.345 [2024-11-26 07:15:53.211129] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:06:09.345 [2024-11-26 07:15:53.211233] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1863220 ] 00:06:09.345 [2024-11-26 07:15:53.293856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.345 [2024-11-26 07:15:53.330869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.289 test_start 00:06:10.289 oneshot 00:06:10.289 tick 100 00:06:10.289 tick 100 00:06:10.289 tick 250 00:06:10.289 tick 100 00:06:10.289 tick 100 00:06:10.289 tick 250 00:06:10.290 tick 100 00:06:10.290 tick 500 00:06:10.290 tick 100 00:06:10.290 tick 100 00:06:10.290 tick 250 00:06:10.290 tick 100 00:06:10.290 tick 100 00:06:10.290 test_end 00:06:10.290 00:06:10.290 real 0m1.174s 00:06:10.290 user 0m1.106s 00:06:10.290 sys 0m0.064s 00:06:10.290 07:15:54 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.290 07:15:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:10.290 ************************************ 00:06:10.290 END TEST event_reactor 00:06:10.290 ************************************ 00:06:10.290 07:15:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:10.290 07:15:54 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:10.290 07:15:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.290 07:15:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.550 ************************************ 00:06:10.550 START TEST event_reactor_perf 00:06:10.550 ************************************ 00:06:10.550 07:15:54 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:06:10.550 [2024-11-26 07:15:54.462528] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:06:10.551 [2024-11-26 07:15:54.462623] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1863553 ] 00:06:10.551 [2024-11-26 07:15:54.544768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.551 [2024-11-26 07:15:54.581121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.495 test_start 00:06:11.495 test_end 00:06:11.495 Performance: 366615 events per second 00:06:11.495 00:06:11.495 real 0m1.172s 00:06:11.495 user 0m1.104s 00:06:11.495 sys 0m0.064s 00:06:11.495 07:15:55 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.495 07:15:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.495 ************************************ 00:06:11.495 END TEST event_reactor_perf 00:06:11.495 ************************************ 00:06:11.757 07:15:55 event -- event/event.sh@49 -- # uname -s 00:06:11.757 07:15:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:11.757 07:15:55 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:11.757 07:15:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.757 07:15:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.757 07:15:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.757 ************************************ 00:06:11.757 START TEST event_scheduler 00:06:11.757 ************************************ 00:06:11.757 07:15:55 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:11.757 * Looking for test storage... 00:06:11.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:11.757 07:15:55 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:11.757 07:15:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:11.757 07:15:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:11.757 07:15:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.757 07:15:55 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:12.017 07:15:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:12.018 07:15:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.018 07:15:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:12.018 07:15:55 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.018 07:15:55 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.018 07:15:55 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.018 07:15:55 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:12.018 07:15:55 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.018 07:15:55 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.018 --rc genhtml_branch_coverage=1 00:06:12.018 --rc genhtml_function_coverage=1 00:06:12.018 --rc genhtml_legend=1 00:06:12.018 --rc geninfo_all_blocks=1 00:06:12.018 --rc geninfo_unexecuted_blocks=1 00:06:12.018 00:06:12.018 ' 00:06:12.018 07:15:55 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.018 --rc genhtml_branch_coverage=1 00:06:12.018 --rc genhtml_function_coverage=1 00:06:12.018 --rc 
genhtml_legend=1 00:06:12.018 --rc geninfo_all_blocks=1 00:06:12.018 --rc geninfo_unexecuted_blocks=1 00:06:12.018 00:06:12.018 ' 00:06:12.018 07:15:55 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.018 --rc genhtml_branch_coverage=1 00:06:12.018 --rc genhtml_function_coverage=1 00:06:12.018 --rc genhtml_legend=1 00:06:12.018 --rc geninfo_all_blocks=1 00:06:12.018 --rc geninfo_unexecuted_blocks=1 00:06:12.018 00:06:12.018 ' 00:06:12.018 07:15:55 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.018 --rc genhtml_branch_coverage=1 00:06:12.018 --rc genhtml_function_coverage=1 00:06:12.018 --rc genhtml_legend=1 00:06:12.018 --rc geninfo_all_blocks=1 00:06:12.018 --rc geninfo_unexecuted_blocks=1 00:06:12.018 00:06:12.018 ' 00:06:12.018 07:15:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:12.018 07:15:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1863942 00:06:12.018 07:15:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.018 07:15:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1863942 00:06:12.018 07:15:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:12.018 07:15:55 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1863942 ']' 00:06:12.018 07:15:55 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.018 07:15:55 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.018 07:15:55 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.018 07:15:55 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.018 07:15:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.018 [2024-11-26 07:15:55.951528] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:06:12.018 [2024-11-26 07:15:55.951604] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1863942 ] 00:06:12.018 [2024-11-26 07:15:56.021744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:12.018 [2024-11-26 07:15:56.060886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.018 [2024-11-26 07:15:56.061047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.018 [2024-11-26 07:15:56.061204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.018 [2024-11-26 07:15:56.061206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.962 07:15:56 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.962 07:15:56 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:12.962 07:15:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:12.962 07:15:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.962 07:15:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.962 [2024-11-26 07:15:56.763378] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:12.962 [2024-11-26 07:15:56.763394] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:12.962 [2024-11-26 07:15:56.763401] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:12.962 [2024-11-26 07:15:56.763405] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:12.962 [2024-11-26 07:15:56.763409] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:12.962 07:15:56 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.962 07:15:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:12.962 07:15:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.962 07:15:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.962 [2024-11-26 07:15:56.819380] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:12.962 07:15:56 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.962 07:15:56 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:12.962 07:15:56 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.962 07:15:56 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.962 07:15:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.962 ************************************ 00:06:12.962 START TEST scheduler_create_thread 00:06:12.962 ************************************ 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.962 2 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.962 3 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.962 4 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.962 5 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.962 07:15:56 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.962 6 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.962 7 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.962 8 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.962 07:15:56 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.962 9 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.962 07:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.534 10 00:06:13.534 07:15:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.534 07:15:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:13.534 07:15:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.534 07:15:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.920 07:15:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.920 07:15:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:14.920 07:15:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:14.920 07:15:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.920 07:15:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.492 07:15:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.492 07:15:59 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:15.492 07:15:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.492 07:15:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.435 07:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.435 07:16:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:16.435 07:16:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:16.435 07:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.435 07:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.006 07:16:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.006 00:06:17.006 real 0m4.225s 00:06:17.006 user 0m0.029s 00:06:17.006 sys 0m0.003s 00:06:17.006 07:16:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.006 07:16:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.006 ************************************ 00:06:17.006 END TEST scheduler_create_thread 00:06:17.006 ************************************ 00:06:17.006 07:16:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:17.006 07:16:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1863942 00:06:17.006 07:16:01 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1863942 ']' 00:06:17.006 07:16:01 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1863942 00:06:17.006 07:16:01 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:17.006 07:16:01 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.006 07:16:01 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1863942 00:06:17.268 07:16:01 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:17.268 07:16:01 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:17.268 07:16:01 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1863942' 00:06:17.268 killing process with pid 1863942 00:06:17.268 07:16:01 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1863942 00:06:17.268 07:16:01 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1863942 00:06:17.268 [2024-11-26 07:16:01.360614] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:17.529 00:06:17.529 real 0m5.823s 00:06:17.529 user 0m12.981s 00:06:17.529 sys 0m0.401s 00:06:17.529 07:16:01 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.529 07:16:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.529 ************************************ 00:06:17.529 END TEST event_scheduler 00:06:17.529 ************************************ 00:06:17.529 07:16:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:17.529 07:16:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:17.529 07:16:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.529 07:16:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.529 07:16:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.529 ************************************ 00:06:17.529 START TEST app_repeat 00:06:17.529 ************************************ 00:06:17.529 07:16:01 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:17.529 07:16:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.529 07:16:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.529 07:16:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:17.529 07:16:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.529 07:16:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:17.529 07:16:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:17.529 07:16:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:17.529 07:16:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1865026 00:06:17.529 07:16:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.529 07:16:01 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:17.529 07:16:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1865026' 00:06:17.529 Process app_repeat pid: 1865026 00:06:17.529 07:16:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:17.529 07:16:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:17.529 spdk_app_start Round 0 00:06:17.529 07:16:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1865026 /var/tmp/spdk-nbd.sock 00:06:17.529 07:16:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1865026 ']' 00:06:17.529 07:16:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.529 07:16:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.529 07:16:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.529 07:16:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.529 07:16:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.529 [2024-11-26 07:16:01.642240] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:06:17.529 [2024-11-26 07:16:01.642307] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1865026 ] 00:06:17.790 [2024-11-26 07:16:01.726156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.790 [2024-11-26 07:16:01.766622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.790 [2024-11-26 07:16:01.766626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.790 07:16:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.790 07:16:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:17.790 07:16:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.051 Malloc0 00:06:18.051 07:16:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.312 Malloc1 00:06:18.312 07:16:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.312 07:16:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.312 07:16:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.312 07:16:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.312 07:16:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.312 07:16:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.312 07:16:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.312 
07:16:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.312 07:16:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.312 07:16:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.312 07:16:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.312 07:16:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.312 07:16:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:18.312 07:16:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.312 07:16:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.312 07:16:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:18.312 /dev/nbd0 00:06:18.312 07:16:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.312 07:16:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.312 07:16:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:18.312 07:16:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:18.312 07:16:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.312 07:16:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.312 07:16:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:18.312 07:16:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:18.312 07:16:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.312 07:16:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.312 07:16:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:18.312 1+0 records in 00:06:18.312 1+0 records out 00:06:18.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241322 s, 17.0 MB/s 00:06:18.574 07:16:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:18.574 07:16:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:18.574 07:16:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:18.574 07:16:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.574 07:16:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:18.574 07:16:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.574 07:16:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.574 07:16:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:18.574 /dev/nbd1 00:06:18.574 07:16:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.574 07:16:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.574 07:16:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:18.574 07:16:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:18.574 07:16:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.574 07:16:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.574 07:16:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:18.574 07:16:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:18.574 07:16:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.574 07:16:02 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.574 07:16:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.574 1+0 records in 00:06:18.574 1+0 records out 00:06:18.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226632 s, 18.1 MB/s 00:06:18.574 07:16:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:18.574 07:16:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:18.574 07:16:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:18.574 07:16:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.574 07:16:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:18.574 07:16:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.574 07:16:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.574 07:16:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.574 07:16:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.574 07:16:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.835 07:16:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:18.835 { 00:06:18.835 "nbd_device": "/dev/nbd0", 00:06:18.835 "bdev_name": "Malloc0" 00:06:18.835 }, 00:06:18.835 { 00:06:18.836 "nbd_device": "/dev/nbd1", 00:06:18.836 "bdev_name": "Malloc1" 00:06:18.836 } 00:06:18.836 ]' 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.836 { 00:06:18.836 "nbd_device": "/dev/nbd0", 00:06:18.836 "bdev_name": "Malloc0" 00:06:18.836 
}, 00:06:18.836 { 00:06:18.836 "nbd_device": "/dev/nbd1", 00:06:18.836 "bdev_name": "Malloc1" 00:06:18.836 } 00:06:18.836 ]' 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.836 /dev/nbd1' 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.836 /dev/nbd1' 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.836 256+0 records in 00:06:18.836 256+0 records out 00:06:18.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121854 s, 86.1 MB/s 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.836 256+0 records in 00:06:18.836 256+0 records out 00:06:18.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166183 s, 63.1 MB/s 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.836 07:16:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.121 256+0 records in 00:06:19.121 256+0 records out 00:06:19.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287885 s, 36.4 MB/s 00:06:19.121 07:16:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.122 07:16:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.122 07:16:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.122 07:16:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.122 07:16:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.122 07:16:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.122 07:16:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.122 07:16:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.122 07:16:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.122 07:16:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.122 07:16:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.122 07:16:02 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.122 07:16:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.122 07:16:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.122 07:16:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.122 07:16:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.122 07:16:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:19.122 07:16:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.122 07:16:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.122 07:16:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.122 07:16:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.122 07:16:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.122 07:16:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.122 07:16:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.122 07:16:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.122 07:16:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.122 07:16:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.122 07:16:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.122 07:16:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.383 07:16:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.383 07:16:03 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.383 07:16:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.383 07:16:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.383 07:16:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.383 07:16:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.383 07:16:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.383 07:16:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.383 07:16:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.383 07:16:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.383 07:16:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.645 07:16:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.645 07:16:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.645 07:16:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.645 07:16:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.645 07:16:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.645 07:16:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.645 07:16:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:19.645 07:16:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.645 07:16:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.645 07:16:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:19.645 07:16:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:19.645 07:16:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:19.645 07:16:03 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.906 07:16:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:19.906 [2024-11-26 07:16:03.912117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.906 [2024-11-26 07:16:03.948750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.906 [2024-11-26 07:16:03.948752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.906 [2024-11-26 07:16:03.980623] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:19.906 [2024-11-26 07:16:03.980658] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:23.207 07:16:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:23.207 07:16:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:23.207 spdk_app_start Round 1 00:06:23.207 07:16:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1865026 /var/tmp/spdk-nbd.sock 00:06:23.207 07:16:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1865026 ']' 00:06:23.207 07:16:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.207 07:16:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.207 07:16:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:23.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:23.207 07:16:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.207 07:16:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.207 07:16:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.207 07:16:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:23.207 07:16:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.207 Malloc0 00:06:23.207 07:16:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.207 Malloc1 00:06:23.207 07:16:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.207 07:16:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.207 07:16:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.207 07:16:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.207 07:16:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.207 07:16:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.207 07:16:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.207 07:16:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.207 07:16:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.207 07:16:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.207 07:16:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.207 07:16:07 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:23.207 07:16:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:23.207 07:16:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.207 07:16:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.207 07:16:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.527 /dev/nbd0 00:06:23.527 07:16:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.527 07:16:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.527 07:16:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:23.527 07:16:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:23.527 07:16:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.527 07:16:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.527 07:16:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:23.527 07:16:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:23.527 07:16:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.527 07:16:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.527 07:16:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.527 1+0 records in 00:06:23.527 1+0 records out 00:06:23.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027275 s, 15.0 MB/s 00:06:23.527 07:16:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.527 07:16:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:23.527 07:16:07 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.527 07:16:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.527 07:16:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:23.527 07:16:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.527 07:16:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.527 07:16:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:23.822 /dev/nbd1 00:06:23.822 07:16:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:23.822 07:16:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:23.822 07:16:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:23.822 07:16:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:23.822 07:16:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.822 07:16:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.822 07:16:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:23.822 07:16:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:23.822 07:16:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.822 07:16:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.822 07:16:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.822 1+0 records in 00:06:23.822 1+0 records out 00:06:23.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029041 s, 14.1 MB/s 00:06:23.822 07:16:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.822 07:16:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:23.822 07:16:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.822 07:16:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.822 07:16:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:23.822 07:16:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.822 07:16:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.822 07:16:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.822 07:16:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.822 07:16:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.822 07:16:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.822 { 00:06:23.822 "nbd_device": "/dev/nbd0", 00:06:23.822 "bdev_name": "Malloc0" 00:06:23.822 }, 00:06:23.822 { 00:06:23.822 "nbd_device": "/dev/nbd1", 00:06:23.822 "bdev_name": "Malloc1" 00:06:23.822 } 00:06:23.822 ]' 00:06:24.100 07:16:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.101 { 00:06:24.101 "nbd_device": "/dev/nbd0", 00:06:24.101 "bdev_name": "Malloc0" 00:06:24.101 }, 00:06:24.101 { 00:06:24.101 "nbd_device": "/dev/nbd1", 00:06:24.101 "bdev_name": "Malloc1" 00:06:24.101 } 00:06:24.101 ]' 00:06:24.101 07:16:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.101 07:16:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.101 /dev/nbd1' 00:06:24.101 07:16:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.101 /dev/nbd1' 00:06:24.101 
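The nbd_get_count check traced above (bdev/nbd_common.sh@63-66) reduces the nbd_get_disks JSON to one device path per line with jq, then counts matching lines with grep -c. A self-contained sketch of that counting step; the two-line device list is hard-coded here as a stand-in for the live rpc.py/jq output, so the sketch runs without SPDK or jq:

```shell
# Stand-in for the output of: rpc.py nbd_get_disks | jq -r '.[] | .nbd_device'
# (hard-coded; in the log this list comes from the running spdk-nbd socket)
nbd_disks_name='/dev/nbd0
/dev/nbd1'
# grep -c counts the lines naming an nbd device, as in nbd_common.sh@65
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
echo "count=$count"
```

The suite then compares this count against the number of disks it started, which is the `'[' 2 -ne 2 ']'` check visible in the trace.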
07:16:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.101 07:16:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.101 07:16:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.101 07:16:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.101 07:16:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.101 07:16:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.101 07:16:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.101 07:16:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.101 07:16:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.101 07:16:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.101 07:16:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.101 07:16:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.101 256+0 records in 00:06:24.101 256+0 records out 00:06:24.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121443 s, 86.3 MB/s 00:06:24.101 07:16:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.101 07:16:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.101 256+0 records in 00:06:24.101 256+0 records out 00:06:24.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165168 s, 63.5 MB/s 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.101 256+0 records in 00:06:24.101 256+0 records out 00:06:24.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179721 s, 58.3 MB/s 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.101 07:16:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.378 07:16:08 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.378 07:16:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.639 07:16:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.639 07:16:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.639 07:16:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.639 07:16:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.639 07:16:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.639 07:16:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.639 07:16:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:24.639 07:16:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.639 07:16:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.640 07:16:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.640 07:16:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.640 07:16:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.640 07:16:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.900 07:16:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:24.900 [2024-11-26 07:16:08.961167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.900 [2024-11-26 07:16:08.997459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.900 [2024-11-26 07:16:08.997461] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.901 [2024-11-26 07:16:09.030124] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:24.901 [2024-11-26 07:16:09.030160] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.200 07:16:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.200 07:16:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:28.200 spdk_app_start Round 2 00:06:28.200 07:16:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1865026 /var/tmp/spdk-nbd.sock 00:06:28.200 07:16:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1865026 ']' 00:06:28.200 07:16:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.200 07:16:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.200 07:16:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
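Each round's nbd_dd_data_verify pass (traced above for Round 1 and repeated below for Round 2) writes 1 MiB of /dev/urandom data to every nbd device with dd, then reads it back and byte-compares it with cmp. A minimal self-contained sketch of that write/verify cycle, using plain temporary files as hypothetical stand-ins for /dev/nbd0 and /dev/nbd1 (the real run adds iflag=direct/oflag=direct because it targets block devices):

```shell
# Generate the random reference file once, mirroring nbd_common.sh@76
src=$(mktemp); nbd0=$(mktemp); nbd1=$(mktemp)
dd if=/dev/urandom of="$src" bs=4096 count=256 2>/dev/null
# Write phase: copy the reference data onto each stand-in "device"
for dev in "$nbd0" "$nbd1"; do
    dd if="$src" of="$dev" bs=4096 count=256 2>/dev/null
done
# Verify phase: byte-compare the first 1M of each "device" with the source,
# as nbd_common.sh@83 does with cmp -b -n 1M
verified=0
for dev in "$nbd0" "$nbd1"; do
    if cmp -b -n 1M "$src" "$dev"; then
        verified=$((verified + 1))
    fi
done
echo "verified $verified of 2 devices"
rm -f "$src" "$nbd0" "$nbd1"
```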
00:06:28.200 07:16:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.200 07:16:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.200 07:16:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.200 07:16:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:28.200 07:16:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.200 Malloc0 00:06:28.200 07:16:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.461 Malloc1 00:06:28.461 07:16:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.461 07:16:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.461 07:16:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.461 07:16:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.461 07:16:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.461 07:16:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.461 07:16:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.461 07:16:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.461 07:16:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.461 07:16:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.461 07:16:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.461 07:16:12 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:28.461 07:16:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:28.461 07:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.461 07:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.461 07:16:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.461 /dev/nbd0 00:06:28.461 07:16:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.461 07:16:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.461 07:16:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:28.461 07:16:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:28.462 07:16:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:28.462 07:16:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:28.462 07:16:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:28.462 07:16:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:28.462 07:16:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:28.462 07:16:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:28.462 07:16:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.462 1+0 records in 00:06:28.462 1+0 records out 00:06:28.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218264 s, 18.8 MB/s 00:06:28.462 07:16:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.462 07:16:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:28.462 07:16:12 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.462 07:16:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:28.462 07:16:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:28.462 07:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.462 07:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.462 07:16:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.724 /dev/nbd1 00:06:28.724 07:16:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.724 07:16:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.724 07:16:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:28.724 07:16:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:28.724 07:16:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:28.724 07:16:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:28.724 07:16:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:28.724 07:16:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:28.724 07:16:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:28.724 07:16:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:28.724 07:16:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.724 1+0 records in 00:06:28.724 1+0 records out 00:06:28.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221322 s, 18.5 MB/s 00:06:28.724 07:16:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.724 07:16:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:28.724 07:16:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.724 07:16:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:28.724 07:16:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:28.724 07:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.724 07:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.724 07:16:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.724 07:16:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.724 07:16:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.986 07:16:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.986 { 00:06:28.986 "nbd_device": "/dev/nbd0", 00:06:28.986 "bdev_name": "Malloc0" 00:06:28.986 }, 00:06:28.986 { 00:06:28.986 "nbd_device": "/dev/nbd1", 00:06:28.986 "bdev_name": "Malloc1" 00:06:28.986 } 00:06:28.986 ]' 00:06:28.986 07:16:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.986 { 00:06:28.986 "nbd_device": "/dev/nbd0", 00:06:28.986 "bdev_name": "Malloc0" 00:06:28.986 }, 00:06:28.986 { 00:06:28.986 "nbd_device": "/dev/nbd1", 00:06:28.986 "bdev_name": "Malloc1" 00:06:28.986 } 00:06:28.986 ]' 00:06:28.986 07:16:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.986 /dev/nbd1' 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.986 /dev/nbd1' 00:06:28.986 
07:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.986 256+0 records in 00:06:28.986 256+0 records out 00:06:28.986 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117726 s, 89.1 MB/s 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.986 256+0 records in 00:06:28.986 256+0 records out 00:06:28.986 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167781 s, 62.5 MB/s 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.986 256+0 records in 00:06:28.986 256+0 records out 00:06:28.986 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187168 s, 56.0 MB/s 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:28.986 07:16:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.248 07:16:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:29.248 07:16:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.248 07:16:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:29.248 07:16:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.248 07:16:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:29.248 07:16:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.248 07:16:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:29.248 07:16:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.248 07:16:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.248 07:16:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.248 07:16:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.248 07:16:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.248 07:16:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.248 07:16:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.248 07:16:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.248 07:16:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.248 07:16:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.510 07:16:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.510 07:16:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.510 07:16:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.510 07:16:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.510 07:16:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.510 07:16:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.510 07:16:13 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:29.510 07:16:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.510 07:16:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.510 07:16:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.510 07:16:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.771 07:16:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.771 07:16:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.771 07:16:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.771 07:16:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.771 07:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.771 07:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.771 07:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:29.771 07:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.771 07:16:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.771 07:16:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:29.771 07:16:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:29.771 07:16:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:29.771 07:16:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:30.031 07:16:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:30.031 [2024-11-26 07:16:14.029459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.031 [2024-11-26 07:16:14.065898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.031 [2024-11-26 07:16:14.065930] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.031 [2024-11-26 07:16:14.097841] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:30.031 [2024-11-26 07:16:14.097888] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:33.325 07:16:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1865026 /var/tmp/spdk-nbd.sock 00:06:33.325 07:16:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1865026 ']' 00:06:33.325 07:16:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.325 07:16:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.325 07:16:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:33.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
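The waitfornbd/waitfornbd_exit helpers traced throughout poll /proc/partitions up to 20 times, breaking out as soon as the nbd name appears (start) or disappears (stop). A self-contained sketch of the teardown-side loop, polling a temporary file as a hypothetical stand-in for /proc/partitions so it runs anywhere:

```shell
partitions=$(mktemp)
printf 'nbd0\nnbd1\n' > "$partitions"
# Simulate asynchronous device teardown: nbd0 vanishes after ~0.2s
( sleep 0.2; printf 'nbd1\n' > "$partitions" ) &
i=1
while [ "$i" -le 20 ]; do           # bounded retries, as in nbd_common.sh@37
    if ! grep -q -w nbd0 "$partitions"; then
        break                       # entry gone: the disk has stopped
    fi
    sleep 0.1
    i=$((i + 1))
done
if grep -q -w nbd0 "$partitions"; then status='still present'; else status='gone'; fi
echo "nbd0 is $status"
rm -f "$partitions"
```

Bounding the loop at 20 iterations keeps a wedged nbd device from hanging the whole autotest run; the real helper returns 1 after exhausting its retries.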
00:06:33.325 07:16:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.325 07:16:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.325 07:16:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.325 07:16:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:33.326 07:16:17 event.app_repeat -- event/event.sh@39 -- # killprocess 1865026 00:06:33.326 07:16:17 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1865026 ']' 00:06:33.326 07:16:17 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1865026 00:06:33.326 07:16:17 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:33.326 07:16:17 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.326 07:16:17 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1865026 00:06:33.326 07:16:17 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.326 07:16:17 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.326 07:16:17 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1865026' 00:06:33.326 killing process with pid 1865026 00:06:33.326 07:16:17 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1865026 00:06:33.326 07:16:17 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1865026 00:06:33.326 spdk_app_start is called in Round 0. 00:06:33.326 Shutdown signal received, stop current app iteration 00:06:33.326 Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 reinitialization... 00:06:33.326 spdk_app_start is called in Round 1. 00:06:33.326 Shutdown signal received, stop current app iteration 00:06:33.326 Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 reinitialization... 00:06:33.326 spdk_app_start is called in Round 2. 
00:06:33.326 Shutdown signal received, stop current app iteration 00:06:33.326 Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 reinitialization... 00:06:33.326 spdk_app_start is called in Round 3. 00:06:33.326 Shutdown signal received, stop current app iteration 00:06:33.326 07:16:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:33.326 07:16:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:33.326 00:06:33.326 real 0m15.630s 00:06:33.326 user 0m34.001s 00:06:33.326 sys 0m2.250s 00:06:33.326 07:16:17 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.326 07:16:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.326 ************************************ 00:06:33.326 END TEST app_repeat 00:06:33.326 ************************************ 00:06:33.326 07:16:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:33.326 07:16:17 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:33.326 07:16:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.326 07:16:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.326 07:16:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.326 ************************************ 00:06:33.326 START TEST cpu_locks 00:06:33.326 ************************************ 00:06:33.326 07:16:17 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:33.326 * Looking for test storage... 
00:06:33.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:33.326 07:16:17 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.326 07:16:17 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.326 07:16:17 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.587 07:16:17 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.587 07:16:17 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:33.587 07:16:17 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.587 07:16:17 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.587 --rc genhtml_branch_coverage=1 00:06:33.587 --rc genhtml_function_coverage=1 00:06:33.587 --rc genhtml_legend=1 00:06:33.587 --rc geninfo_all_blocks=1 00:06:33.587 --rc geninfo_unexecuted_blocks=1 00:06:33.587 00:06:33.587 ' 00:06:33.587 07:16:17 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.587 --rc genhtml_branch_coverage=1 00:06:33.587 --rc genhtml_function_coverage=1 00:06:33.587 --rc genhtml_legend=1 00:06:33.587 --rc geninfo_all_blocks=1 00:06:33.587 --rc geninfo_unexecuted_blocks=1 
00:06:33.587 00:06:33.587 ' 00:06:33.587 07:16:17 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.587 --rc genhtml_branch_coverage=1 00:06:33.587 --rc genhtml_function_coverage=1 00:06:33.587 --rc genhtml_legend=1 00:06:33.587 --rc geninfo_all_blocks=1 00:06:33.587 --rc geninfo_unexecuted_blocks=1 00:06:33.587 00:06:33.587 ' 00:06:33.587 07:16:17 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.587 --rc genhtml_branch_coverage=1 00:06:33.587 --rc genhtml_function_coverage=1 00:06:33.587 --rc genhtml_legend=1 00:06:33.587 --rc geninfo_all_blocks=1 00:06:33.587 --rc geninfo_unexecuted_blocks=1 00:06:33.587 00:06:33.587 ' 00:06:33.587 07:16:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:33.587 07:16:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:33.587 07:16:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:33.587 07:16:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:33.587 07:16:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.587 07:16:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.587 07:16:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.587 ************************************ 00:06:33.587 START TEST default_locks 00:06:33.587 ************************************ 00:06:33.587 07:16:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:33.587 07:16:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1868595 00:06:33.588 07:16:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1868595 00:06:33.588 07:16:17 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.588 07:16:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1868595 ']' 00:06:33.588 07:16:17 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.588 07:16:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.588 07:16:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.588 07:16:17 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.588 07:16:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.588 [2024-11-26 07:16:17.608613] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:06:33.588 [2024-11-26 07:16:17.608662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1868595 ] 00:06:33.588 [2024-11-26 07:16:17.686834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.849 [2024-11-26 07:16:17.723869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.421 07:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.421 07:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:34.421 07:16:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1868595 00:06:34.421 07:16:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1868595 00:06:34.421 07:16:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.994 lslocks: write error 00:06:34.994 07:16:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1868595 00:06:34.994 07:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1868595 ']' 00:06:34.994 07:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1868595 00:06:34.994 07:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:34.994 07:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.994 07:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1868595 00:06:34.994 07:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.994 07:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.994 07:16:18 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1868595' 00:06:34.994 killing process with pid 1868595 00:06:34.994 07:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1868595 00:06:34.994 07:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1868595 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1868595 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1868595 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1868595 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1868595 ']' 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1868595) - No such process 00:06:35.254 ERROR: process (pid: 1868595) is no longer running 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:35.254 00:06:35.254 real 0m1.616s 00:06:35.254 user 0m1.749s 00:06:35.254 sys 0m0.527s 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.254 07:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.254 ************************************ 00:06:35.254 END TEST default_locks 00:06:35.254 ************************************ 00:06:35.254 07:16:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:35.254 07:16:19 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.254 07:16:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.254 07:16:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.254 ************************************ 00:06:35.254 START TEST default_locks_via_rpc 00:06:35.254 ************************************ 00:06:35.254 07:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:35.254 07:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1868967 00:06:35.254 07:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1868967 00:06:35.254 07:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.254 07:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1868967 ']' 00:06:35.254 07:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.254 07:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.254 07:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.254 07:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.254 07:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.254 [2024-11-26 07:16:19.299292] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:06:35.254 [2024-11-26 07:16:19.299348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1868967 ] 00:06:35.254 [2024-11-26 07:16:19.380752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.514 [2024-11-26 07:16:19.421773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.086 07:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.086 07:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:36.086 07:16:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:36.086 07:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.086 07:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.086 07:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.086 07:16:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:36.086 07:16:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:36.086 07:16:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:36.086 07:16:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:36.086 07:16:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:36.086 07:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.086 07:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.086 07:16:20 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.086 07:16:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1868967 00:06:36.086 07:16:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1868967 00:06:36.086 07:16:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.658 07:16:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1868967 00:06:36.658 07:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1868967 ']' 00:06:36.658 07:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1868967 00:06:36.658 07:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:36.658 07:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.658 07:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1868967 00:06:36.658 07:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.658 07:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.658 07:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1868967' 00:06:36.658 killing process with pid 1868967 00:06:36.658 07:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1868967 00:06:36.658 07:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1868967 00:06:36.658 00:06:36.658 real 0m1.530s 00:06:36.658 user 0m1.644s 00:06:36.658 sys 0m0.526s 00:06:36.658 07:16:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.658 07:16:20 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.658 ************************************ 00:06:36.658 END TEST default_locks_via_rpc 00:06:36.658 ************************************ 00:06:36.919 07:16:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:36.919 07:16:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.919 07:16:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.919 07:16:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.919 ************************************ 00:06:36.919 START TEST non_locking_app_on_locked_coremask 00:06:36.919 ************************************ 00:06:36.919 07:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:36.919 07:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1869320 00:06:36.919 07:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1869320 /var/tmp/spdk.sock 00:06:36.919 07:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.919 07:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1869320 ']' 00:06:36.919 07:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.919 07:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.919 07:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:36.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.919 07:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.919 07:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.919 [2024-11-26 07:16:20.901685] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:06:36.919 [2024-11-26 07:16:20.901741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1869320 ] 00:06:36.919 [2024-11-26 07:16:20.981852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.919 [2024-11-26 07:16:21.023352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.861 07:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.861 07:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:37.861 07:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1869346 00:06:37.861 07:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1869346 /var/tmp/spdk2.sock 00:06:37.861 07:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:37.861 07:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1869346 ']' 00:06:37.861 07:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:37.861 07:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.861 07:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.861 07:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.861 07:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.861 [2024-11-26 07:16:21.730593] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:06:37.861 [2024-11-26 07:16:21.730647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1869346 ] 00:06:37.861 [2024-11-26 07:16:21.852583] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:37.861 [2024-11-26 07:16:21.852618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.861 [2024-11-26 07:16:21.925259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.435 07:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.435 07:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:38.435 07:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1869320 00:06:38.435 07:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1869320 00:06:38.435 07:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.008 lslocks: write error 00:06:39.008 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1869320 00:06:39.008 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1869320 ']' 00:06:39.008 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1869320 00:06:39.008 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:39.008 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.008 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1869320 00:06:39.008 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.008 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.008 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1869320' 00:06:39.008 killing process with pid 1869320 00:06:39.008 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1869320 00:06:39.008 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1869320 00:06:39.580 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1869346 00:06:39.580 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1869346 ']' 00:06:39.580 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1869346 00:06:39.580 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:39.580 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.580 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1869346 00:06:39.580 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.580 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.580 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1869346' 00:06:39.580 killing process with pid 1869346 00:06:39.580 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1869346 00:06:39.580 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1869346 00:06:39.842 00:06:39.842 real 0m2.950s 00:06:39.842 user 0m3.235s 00:06:39.842 sys 0m0.914s 00:06:39.842 07:16:23 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.842 07:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.842 ************************************ 00:06:39.842 END TEST non_locking_app_on_locked_coremask 00:06:39.842 ************************************ 00:06:39.842 07:16:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:39.842 07:16:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.842 07:16:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.842 07:16:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.842 ************************************ 00:06:39.842 START TEST locking_app_on_unlocked_coremask 00:06:39.842 ************************************ 00:06:39.842 07:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:39.842 07:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1869862 00:06:39.842 07:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1869862 /var/tmp/spdk.sock 00:06:39.842 07:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:39.842 07:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1869862 ']' 00:06:39.842 07:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.842 07:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.842 07:16:23 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.842 07:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.842 07:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.842 [2024-11-26 07:16:23.929518] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:06:39.842 [2024-11-26 07:16:23.929574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1869862 ] 00:06:40.104 [2024-11-26 07:16:24.012347] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:40.104 [2024-11-26 07:16:24.012382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.104 [2024-11-26 07:16:24.054065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.698 07:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.698 07:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:40.698 07:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:40.698 07:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1870053 00:06:40.698 07:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1870053 /var/tmp/spdk2.sock 00:06:40.698 07:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1870053 ']' 00:06:40.698 07:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.698 07:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.698 07:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.698 07:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.698 07:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.698 [2024-11-26 07:16:24.747426] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:06:40.698 [2024-11-26 07:16:24.747478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1870053 ] 00:06:40.959 [2024-11-26 07:16:24.867750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.959 [2024-11-26 07:16:24.939905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.531 07:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.531 07:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:41.531 07:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1870053 00:06:41.531 07:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.531 07:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1870053 00:06:42.524 lslocks: write error 00:06:42.524 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1869862 00:06:42.524 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1869862 ']' 00:06:42.524 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1869862 00:06:42.524 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:42.524 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.524 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1869862 00:06:42.524 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.524 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.524 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1869862' 00:06:42.524 killing process with pid 1869862 00:06:42.524 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1869862 00:06:42.524 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1869862 00:06:42.786 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1870053 00:06:42.786 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1870053 ']' 00:06:42.786 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1870053 00:06:42.786 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:42.787 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.787 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1870053 00:06:42.787 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.787 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.787 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1870053' 00:06:42.787 killing process with pid 1870053 00:06:42.787 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1870053 00:06:42.787 07:16:26 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1870053 00:06:43.048 00:06:43.048 real 0m3.118s 00:06:43.048 user 0m3.432s 00:06:43.048 sys 0m0.941s 00:06:43.048 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.048 07:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.048 ************************************ 00:06:43.048 END TEST locking_app_on_unlocked_coremask 00:06:43.048 ************************************ 00:06:43.048 07:16:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:43.048 07:16:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.048 07:16:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.048 07:16:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.048 ************************************ 00:06:43.048 START TEST locking_app_on_locked_coremask 00:06:43.048 ************************************ 00:06:43.048 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:43.048 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1870491 00:06:43.048 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1870491 /var/tmp/spdk.sock 00:06:43.048 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.048 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1870491 ']' 00:06:43.048 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:43.048 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.048 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.048 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.048 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.048 [2024-11-26 07:16:27.124273] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:06:43.048 [2024-11-26 07:16:27.124324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1870491 ] 00:06:43.309 [2024-11-26 07:16:27.201639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.309 [2024-11-26 07:16:27.237613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.881 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.881 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:43.881 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:43.881 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1870765 00:06:43.881 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1870765 /var/tmp/spdk2.sock 
00:06:43.881 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:43.881 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1870765 /var/tmp/spdk2.sock 00:06:43.881 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:43.881 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.881 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:43.881 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.881 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1870765 /var/tmp/spdk2.sock 00:06:43.881 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1870765 ']' 00:06:43.881 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.881 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.881 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:43.881 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.881 07:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.881 [2024-11-26 07:16:27.970929] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:06:43.881 [2024-11-26 07:16:27.970986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1870765 ] 00:06:44.142 [2024-11-26 07:16:28.093976] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1870491 has claimed it. 00:06:44.142 [2024-11-26 07:16:28.094022] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:44.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1870765) - No such process 00:06:44.714 ERROR: process (pid: 1870765) is no longer running 00:06:44.714 07:16:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.714 07:16:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:44.714 07:16:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:44.714 07:16:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.714 07:16:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:44.714 07:16:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.714 07:16:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1870491 00:06:44.714 07:16:28 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1870491 00:06:44.714 07:16:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.286 lslocks: write error 00:06:45.286 07:16:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1870491 00:06:45.286 07:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1870491 ']' 00:06:45.286 07:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1870491 00:06:45.286 07:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:45.286 07:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.286 07:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1870491 00:06:45.286 07:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.286 07:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.286 07:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1870491' 00:06:45.286 killing process with pid 1870491 00:06:45.286 07:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1870491 00:06:45.286 07:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1870491 00:06:45.546 00:06:45.546 real 0m2.391s 00:06:45.546 user 0m2.661s 00:06:45.546 sys 0m0.676s 00:06:45.546 07:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.546 07:16:29 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:45.547 ************************************ 00:06:45.547 END TEST locking_app_on_locked_coremask 00:06:45.547 ************************************ 00:06:45.547 07:16:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:45.547 07:16:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.547 07:16:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.547 07:16:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.547 ************************************ 00:06:45.547 START TEST locking_overlapped_coremask 00:06:45.547 ************************************ 00:06:45.547 07:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:45.547 07:16:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1871148 00:06:45.547 07:16:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1871148 /var/tmp/spdk.sock 00:06:45.547 07:16:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:45.547 07:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1871148 ']' 00:06:45.547 07:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.547 07:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.547 07:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:45.547 07:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.547 07:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.547 [2024-11-26 07:16:29.593831] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:06:45.547 [2024-11-26 07:16:29.593888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1871148 ] 00:06:45.547 [2024-11-26 07:16:29.677728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.806 [2024-11-26 07:16:29.716123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.806 [2024-11-26 07:16:29.716238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.806 [2024-11-26 07:16:29.716240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.379 07:16:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.379 07:16:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:46.379 07:16:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1871214 00:06:46.379 07:16:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1871214 /var/tmp/spdk2.sock 00:06:46.379 07:16:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:46.379 07:16:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:46.379 07:16:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 1871214 /var/tmp/spdk2.sock 00:06:46.379 07:16:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:46.379 07:16:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.379 07:16:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:46.379 07:16:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.379 07:16:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1871214 /var/tmp/spdk2.sock 00:06:46.379 07:16:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1871214 ']' 00:06:46.379 07:16:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.379 07:16:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.379 07:16:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.379 07:16:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.379 07:16:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.379 [2024-11-26 07:16:30.426453] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:06:46.379 [2024-11-26 07:16:30.426510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1871214 ] 00:06:46.640 [2024-11-26 07:16:30.523852] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1871148 has claimed it. 00:06:46.640 [2024-11-26 07:16:30.523892] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:47.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1871214) - No such process 00:06:47.210 ERROR: process (pid: 1871214) is no longer running 00:06:47.210 07:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.210 07:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:47.210 07:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:47.210 07:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.210 07:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:47.210 07:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.210 07:16:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:47.210 07:16:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:47.210 07:16:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:47.210 07:16:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:47.210 07:16:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1871148 00:06:47.210 07:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1871148 ']' 00:06:47.210 07:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1871148 00:06:47.211 07:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:47.211 07:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.211 07:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1871148 00:06:47.211 07:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.211 07:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.211 07:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1871148' 00:06:47.211 killing process with pid 1871148 00:06:47.211 07:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1871148 00:06:47.211 07:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1871148 00:06:47.211 00:06:47.211 real 0m1.782s 00:06:47.211 user 0m5.109s 00:06:47.211 sys 0m0.397s 00:06:47.211 07:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.211 07:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.211 
************************************ 00:06:47.211 END TEST locking_overlapped_coremask 00:06:47.211 ************************************ 00:06:47.471 07:16:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:47.471 07:16:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.471 07:16:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.471 07:16:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.471 ************************************ 00:06:47.471 START TEST locking_overlapped_coremask_via_rpc 00:06:47.471 ************************************ 00:06:47.472 07:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:47.472 07:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1871516 00:06:47.472 07:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1871516 /var/tmp/spdk.sock 00:06:47.472 07:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:47.472 07:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1871516 ']' 00:06:47.472 07:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.472 07:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.472 07:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:47.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.472 07:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.472 07:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.472 [2024-11-26 07:16:31.458390] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:06:47.472 [2024-11-26 07:16:31.458443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1871516 ] 00:06:47.472 [2024-11-26 07:16:31.536950] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:47.472 [2024-11-26 07:16:31.536978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.472 [2024-11-26 07:16:31.578254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.472 [2024-11-26 07:16:31.578371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.472 [2024-11-26 07:16:31.578374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.414 07:16:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.414 07:16:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:48.414 07:16:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1871693 00:06:48.414 07:16:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1871693 /var/tmp/spdk2.sock 00:06:48.414 07:16:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1871693 ']' 00:06:48.414 07:16:32 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:48.414 07:16:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.414 07:16:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.414 07:16:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.414 07:16:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.414 07:16:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.414 [2024-11-26 07:16:32.308154] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:06:48.414 [2024-11-26 07:16:32.308206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1871693 ] 00:06:48.414 [2024-11-26 07:16:32.408581] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:48.414 [2024-11-26 07:16:32.408605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.414 [2024-11-26 07:16:32.467803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.414 [2024-11-26 07:16:32.467959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.414 [2024-11-26 07:16:32.467961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.987 07:16:33 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.987 [2024-11-26 07:16:33.104919] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1871516 has claimed it. 00:06:48.987 request: 00:06:48.987 { 00:06:48.987 "method": "framework_enable_cpumask_locks", 00:06:48.987 "req_id": 1 00:06:48.987 } 00:06:48.987 Got JSON-RPC error response 00:06:48.987 response: 00:06:48.987 { 00:06:48.987 "code": -32603, 00:06:48.987 "message": "Failed to claim CPU core: 2" 00:06:48.987 } 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1871516 /var/tmp/spdk.sock 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1871516 ']' 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.987 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.248 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.248 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:49.248 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1871693 /var/tmp/spdk2.sock 00:06:49.248 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1871693 ']' 00:06:49.248 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.248 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.248 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:49.248 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.248 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.509 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.509 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:49.509 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:49.509 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:49.509 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:49.509 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:49.509 00:06:49.509 real 0m2.084s 00:06:49.509 user 0m0.862s 00:06:49.509 sys 0m0.150s 00:06:49.509 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.509 07:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.509 ************************************ 00:06:49.509 END TEST locking_overlapped_coremask_via_rpc 00:06:49.509 ************************************ 00:06:49.509 07:16:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:49.509 07:16:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1871516 ]] 00:06:49.509 07:16:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1871516 00:06:49.509 07:16:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1871516 ']' 00:06:49.509 07:16:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1871516 00:06:49.509 07:16:33 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:49.509 07:16:33 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.509 07:16:33 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1871516 00:06:49.509 07:16:33 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.509 07:16:33 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.509 07:16:33 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1871516' 00:06:49.509 killing process with pid 1871516 00:06:49.509 07:16:33 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1871516 00:06:49.509 07:16:33 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1871516 00:06:49.771 07:16:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1871693 ]] 00:06:49.771 07:16:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1871693 00:06:49.771 07:16:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1871693 ']' 00:06:49.771 07:16:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1871693 00:06:49.771 07:16:33 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:49.771 07:16:33 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.771 07:16:33 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1871693 00:06:49.771 07:16:33 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:49.771 07:16:33 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:49.771 07:16:33 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1871693' 00:06:49.771 killing process with pid 1871693 00:06:49.771 07:16:33 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1871693 00:06:49.771 07:16:33 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1871693 00:06:50.034 07:16:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:50.034 07:16:34 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:50.034 07:16:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1871516 ]] 00:06:50.034 07:16:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1871516 00:06:50.034 07:16:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1871516 ']' 00:06:50.034 07:16:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1871516 00:06:50.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1871516) - No such process 00:06:50.034 07:16:34 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1871516 is not found' 00:06:50.034 Process with pid 1871516 is not found 00:06:50.034 07:16:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1871693 ]] 00:06:50.034 07:16:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1871693 00:06:50.034 07:16:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1871693 ']' 00:06:50.034 07:16:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1871693 00:06:50.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1871693) - No such process 00:06:50.034 07:16:34 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1871693 is not found' 00:06:50.034 Process with pid 1871693 is not found 00:06:50.034 07:16:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:50.034 00:06:50.034 real 0m16.746s 00:06:50.034 user 0m28.822s 00:06:50.034 sys 0m5.066s 00:06:50.034 07:16:34 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.034 
07:16:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.034 ************************************ 00:06:50.034 END TEST cpu_locks 00:06:50.034 ************************************ 00:06:50.034 00:06:50.034 real 0m42.419s 00:06:50.034 user 1m22.411s 00:06:50.034 sys 0m8.357s 00:06:50.034 07:16:34 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.034 07:16:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.034 ************************************ 00:06:50.034 END TEST event 00:06:50.034 ************************************ 00:06:50.034 07:16:34 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:50.034 07:16:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.034 07:16:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.034 07:16:34 -- common/autotest_common.sh@10 -- # set +x 00:06:50.297 ************************************ 00:06:50.297 START TEST thread 00:06:50.297 ************************************ 00:06:50.297 07:16:34 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:50.297 * Looking for test storage... 
00:06:50.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:50.297 07:16:34 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.297 07:16:34 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.297 07:16:34 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.297 07:16:34 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.297 07:16:34 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.297 07:16:34 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.297 07:16:34 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.297 07:16:34 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.297 07:16:34 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.297 07:16:34 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.297 07:16:34 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.297 07:16:34 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.297 07:16:34 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.297 07:16:34 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.297 07:16:34 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.297 07:16:34 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:50.297 07:16:34 thread -- scripts/common.sh@345 -- # : 1 00:06:50.297 07:16:34 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.297 07:16:34 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.297 07:16:34 thread -- scripts/common.sh@365 -- # decimal 1 00:06:50.297 07:16:34 thread -- scripts/common.sh@353 -- # local d=1 00:06:50.297 07:16:34 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.297 07:16:34 thread -- scripts/common.sh@355 -- # echo 1 00:06:50.297 07:16:34 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.297 07:16:34 thread -- scripts/common.sh@366 -- # decimal 2 00:06:50.297 07:16:34 thread -- scripts/common.sh@353 -- # local d=2 00:06:50.297 07:16:34 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.297 07:16:34 thread -- scripts/common.sh@355 -- # echo 2 00:06:50.297 07:16:34 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.297 07:16:34 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.297 07:16:34 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.297 07:16:34 thread -- scripts/common.sh@368 -- # return 0 00:06:50.297 07:16:34 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.297 07:16:34 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.297 --rc genhtml_branch_coverage=1 00:06:50.297 --rc genhtml_function_coverage=1 00:06:50.297 --rc genhtml_legend=1 00:06:50.297 --rc geninfo_all_blocks=1 00:06:50.297 --rc geninfo_unexecuted_blocks=1 00:06:50.297 00:06:50.297 ' 00:06:50.297 07:16:34 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.297 --rc genhtml_branch_coverage=1 00:06:50.297 --rc genhtml_function_coverage=1 00:06:50.297 --rc genhtml_legend=1 00:06:50.297 --rc geninfo_all_blocks=1 00:06:50.297 --rc geninfo_unexecuted_blocks=1 00:06:50.297 00:06:50.297 ' 00:06:50.297 07:16:34 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.297 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.297 --rc genhtml_branch_coverage=1 00:06:50.297 --rc genhtml_function_coverage=1 00:06:50.297 --rc genhtml_legend=1 00:06:50.297 --rc geninfo_all_blocks=1 00:06:50.297 --rc geninfo_unexecuted_blocks=1 00:06:50.297 00:06:50.297 ' 00:06:50.297 07:16:34 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.297 --rc genhtml_branch_coverage=1 00:06:50.297 --rc genhtml_function_coverage=1 00:06:50.298 --rc genhtml_legend=1 00:06:50.298 --rc geninfo_all_blocks=1 00:06:50.298 --rc geninfo_unexecuted_blocks=1 00:06:50.298 00:06:50.298 ' 00:06:50.298 07:16:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:50.298 07:16:34 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:50.298 07:16:34 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.298 07:16:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.298 ************************************ 00:06:50.298 START TEST thread_poller_perf 00:06:50.298 ************************************ 00:06:50.298 07:16:34 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:50.298 [2024-11-26 07:16:34.428246] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:06:50.558 [2024-11-26 07:16:34.428359] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1872300 ] 00:06:50.558 [2024-11-26 07:16:34.515885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.558 [2024-11-26 07:16:34.557564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.559 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:51.500 [2024-11-26T06:16:35.637Z] ====================================== 00:06:51.500 [2024-11-26T06:16:35.637Z] busy:2413074062 (cyc) 00:06:51.500 [2024-11-26T06:16:35.637Z] total_run_count: 288000 00:06:51.500 [2024-11-26T06:16:35.637Z] tsc_hz: 2400000000 (cyc) 00:06:51.500 [2024-11-26T06:16:35.637Z] ====================================== 00:06:51.500 [2024-11-26T06:16:35.637Z] poller_cost: 8378 (cyc), 3490 (nsec) 00:06:51.500 00:06:51.500 real 0m1.194s 00:06:51.500 user 0m1.107s 00:06:51.500 sys 0m0.082s 00:06:51.500 07:16:35 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.500 07:16:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:51.500 ************************************ 00:06:51.500 END TEST thread_poller_perf 00:06:51.500 ************************************ 00:06:51.761 07:16:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:51.761 07:16:35 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:51.761 07:16:35 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.762 07:16:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.762 ************************************ 00:06:51.762 START TEST thread_poller_perf 00:06:51.762 
************************************ 00:06:51.762 07:16:35 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:51.762 [2024-11-26 07:16:35.701932] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:06:51.762 [2024-11-26 07:16:35.702026] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1872499 ] 00:06:51.762 [2024-11-26 07:16:35.787276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.762 [2024-11-26 07:16:35.825148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.762 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:53.146 [2024-11-26T06:16:37.283Z] ====================================== 00:06:53.146 [2024-11-26T06:16:37.283Z] busy:2402030726 (cyc) 00:06:53.146 [2024-11-26T06:16:37.283Z] total_run_count: 3815000 00:06:53.146 [2024-11-26T06:16:37.283Z] tsc_hz: 2400000000 (cyc) 00:06:53.146 [2024-11-26T06:16:37.283Z] ====================================== 00:06:53.146 [2024-11-26T06:16:37.283Z] poller_cost: 629 (cyc), 262 (nsec) 00:06:53.146 00:06:53.146 real 0m1.179s 00:06:53.146 user 0m1.102s 00:06:53.146 sys 0m0.073s 00:06:53.146 07:16:36 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.146 07:16:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.146 ************************************ 00:06:53.146 END TEST thread_poller_perf 00:06:53.146 ************************************ 00:06:53.147 07:16:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:53.147 00:06:53.147 real 0m2.727s 00:06:53.147 user 0m2.377s 00:06:53.147 sys 0m0.365s 00:06:53.147 07:16:36 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.147 07:16:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.147 ************************************ 00:06:53.147 END TEST thread 00:06:53.147 ************************************ 00:06:53.147 07:16:36 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:53.147 07:16:36 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:53.147 07:16:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.147 07:16:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.147 07:16:36 -- common/autotest_common.sh@10 -- # set +x 00:06:53.147 ************************************ 00:06:53.147 START TEST app_cmdline 00:06:53.147 ************************************ 00:06:53.147 07:16:36 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:53.147 * Looking for test storage... 00:06:53.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:53.147 07:16:37 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:53.147 07:16:37 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:53.147 07:16:37 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:53.147 07:16:37 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.147 07:16:37 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:53.147 07:16:37 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.147 07:16:37 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:53.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.147 --rc genhtml_branch_coverage=1 
00:06:53.147 --rc genhtml_function_coverage=1 00:06:53.147 --rc genhtml_legend=1 00:06:53.147 --rc geninfo_all_blocks=1 00:06:53.147 --rc geninfo_unexecuted_blocks=1 00:06:53.147 00:06:53.147 ' 00:06:53.147 07:16:37 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:53.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.147 --rc genhtml_branch_coverage=1 00:06:53.147 --rc genhtml_function_coverage=1 00:06:53.147 --rc genhtml_legend=1 00:06:53.147 --rc geninfo_all_blocks=1 00:06:53.147 --rc geninfo_unexecuted_blocks=1 00:06:53.147 00:06:53.147 ' 00:06:53.147 07:16:37 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:53.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.147 --rc genhtml_branch_coverage=1 00:06:53.147 --rc genhtml_function_coverage=1 00:06:53.147 --rc genhtml_legend=1 00:06:53.147 --rc geninfo_all_blocks=1 00:06:53.147 --rc geninfo_unexecuted_blocks=1 00:06:53.147 00:06:53.147 ' 00:06:53.147 07:16:37 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:53.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.147 --rc genhtml_branch_coverage=1 00:06:53.147 --rc genhtml_function_coverage=1 00:06:53.147 --rc genhtml_legend=1 00:06:53.147 --rc geninfo_all_blocks=1 00:06:53.147 --rc geninfo_unexecuted_blocks=1 00:06:53.147 00:06:53.147 ' 00:06:53.147 07:16:37 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:53.147 07:16:37 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1872775 00:06:53.147 07:16:37 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1872775 00:06:53.147 07:16:37 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1872775 ']' 00:06:53.147 07:16:37 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:53.147 07:16:37 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:53.147 07:16:37 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.147 07:16:37 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.147 07:16:37 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.147 07:16:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:53.147 [2024-11-26 07:16:37.218792] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:06:53.147 [2024-11-26 07:16:37.218877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1872775 ] 00:06:53.408 [2024-11-26 07:16:37.304731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.408 [2024-11-26 07:16:37.347282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.981 07:16:38 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.981 07:16:38 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:53.981 07:16:38 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:54.242 { 00:06:54.242 "version": "SPDK v25.01-pre git sha1 8afd1c921", 00:06:54.242 "fields": { 00:06:54.242 "major": 25, 00:06:54.242 "minor": 1, 00:06:54.242 "patch": 0, 00:06:54.242 "suffix": "-pre", 00:06:54.242 "commit": "8afd1c921" 00:06:54.242 } 00:06:54.242 } 00:06:54.242 07:16:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:54.242 07:16:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:54.242 07:16:38 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:06:54.242 07:16:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:54.242 07:16:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:54.242 07:16:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:54.242 07:16:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:54.242 07:16:38 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.242 07:16:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:54.242 07:16:38 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.242 07:16:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:54.242 07:16:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:54.242 07:16:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.242 07:16:38 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:54.242 07:16:38 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.242 07:16:38 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:54.242 07:16:38 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.242 07:16:38 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:54.242 07:16:38 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.242 07:16:38 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:54.242 07:16:38 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:06:54.242 07:16:38 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:54.242 07:16:38 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:54.242 07:16:38 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.503 request: 00:06:54.503 { 00:06:54.503 "method": "env_dpdk_get_mem_stats", 00:06:54.503 "req_id": 1 00:06:54.503 } 00:06:54.503 Got JSON-RPC error response 00:06:54.503 response: 00:06:54.503 { 00:06:54.503 "code": -32601, 00:06:54.503 "message": "Method not found" 00:06:54.503 } 00:06:54.503 07:16:38 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:54.503 07:16:38 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:54.503 07:16:38 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:54.503 07:16:38 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:54.503 07:16:38 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1872775 00:06:54.503 07:16:38 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1872775 ']' 00:06:54.503 07:16:38 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1872775 00:06:54.503 07:16:38 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:54.503 07:16:38 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.503 07:16:38 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1872775 00:06:54.503 07:16:38 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.503 07:16:38 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.503 07:16:38 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1872775' 00:06:54.503 killing process with pid 1872775 00:06:54.503 
07:16:38 app_cmdline -- common/autotest_common.sh@973 -- # kill 1872775 00:06:54.503 07:16:38 app_cmdline -- common/autotest_common.sh@978 -- # wait 1872775 00:06:54.784 00:06:54.784 real 0m1.747s 00:06:54.784 user 0m2.105s 00:06:54.784 sys 0m0.460s 00:06:54.784 07:16:38 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.784 07:16:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:54.784 ************************************ 00:06:54.784 END TEST app_cmdline 00:06:54.784 ************************************ 00:06:54.784 07:16:38 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:54.784 07:16:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.784 07:16:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.784 07:16:38 -- common/autotest_common.sh@10 -- # set +x 00:06:54.784 ************************************ 00:06:54.784 START TEST version 00:06:54.784 ************************************ 00:06:54.784 07:16:38 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:54.784 * Looking for test storage... 
00:06:54.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:54.784 07:16:38 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:54.784 07:16:38 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:54.784 07:16:38 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.045 07:16:38 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.045 07:16:38 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.045 07:16:38 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.045 07:16:38 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.045 07:16:38 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.045 07:16:38 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.045 07:16:38 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.045 07:16:38 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.045 07:16:38 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.045 07:16:38 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.046 07:16:38 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.046 07:16:38 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.046 07:16:38 version -- scripts/common.sh@344 -- # case "$op" in 00:06:55.046 07:16:38 version -- scripts/common.sh@345 -- # : 1 00:06:55.046 07:16:38 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.046 07:16:38 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.046 07:16:38 version -- scripts/common.sh@365 -- # decimal 1 00:06:55.046 07:16:38 version -- scripts/common.sh@353 -- # local d=1 00:06:55.046 07:16:38 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.046 07:16:38 version -- scripts/common.sh@355 -- # echo 1 00:06:55.046 07:16:38 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.046 07:16:38 version -- scripts/common.sh@366 -- # decimal 2 00:06:55.046 07:16:38 version -- scripts/common.sh@353 -- # local d=2 00:06:55.046 07:16:38 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.046 07:16:38 version -- scripts/common.sh@355 -- # echo 2 00:06:55.046 07:16:38 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.046 07:16:38 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.046 07:16:38 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.046 07:16:38 version -- scripts/common.sh@368 -- # return 0 00:06:55.046 07:16:38 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.046 07:16:38 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:55.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.046 --rc genhtml_branch_coverage=1 00:06:55.046 --rc genhtml_function_coverage=1 00:06:55.046 --rc genhtml_legend=1 00:06:55.046 --rc geninfo_all_blocks=1 00:06:55.046 --rc geninfo_unexecuted_blocks=1 00:06:55.046 00:06:55.046 ' 00:06:55.046 07:16:38 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:55.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.046 --rc genhtml_branch_coverage=1 00:06:55.046 --rc genhtml_function_coverage=1 00:06:55.046 --rc genhtml_legend=1 00:06:55.046 --rc geninfo_all_blocks=1 00:06:55.046 --rc geninfo_unexecuted_blocks=1 00:06:55.046 00:06:55.046 ' 00:06:55.046 07:16:38 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:55.046 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.046 --rc genhtml_branch_coverage=1 00:06:55.046 --rc genhtml_function_coverage=1 00:06:55.046 --rc genhtml_legend=1 00:06:55.046 --rc geninfo_all_blocks=1 00:06:55.046 --rc geninfo_unexecuted_blocks=1 00:06:55.046 00:06:55.046 ' 00:06:55.046 07:16:38 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:55.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.046 --rc genhtml_branch_coverage=1 00:06:55.046 --rc genhtml_function_coverage=1 00:06:55.046 --rc genhtml_legend=1 00:06:55.046 --rc geninfo_all_blocks=1 00:06:55.046 --rc geninfo_unexecuted_blocks=1 00:06:55.046 00:06:55.046 ' 00:06:55.046 07:16:38 version -- app/version.sh@17 -- # get_header_version major 00:06:55.046 07:16:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:55.046 07:16:38 version -- app/version.sh@14 -- # cut -f2 00:06:55.046 07:16:38 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.046 07:16:38 version -- app/version.sh@17 -- # major=25 00:06:55.046 07:16:38 version -- app/version.sh@18 -- # get_header_version minor 00:06:55.046 07:16:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:55.046 07:16:38 version -- app/version.sh@14 -- # cut -f2 00:06:55.046 07:16:38 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.046 07:16:38 version -- app/version.sh@18 -- # minor=1 00:06:55.046 07:16:38 version -- app/version.sh@19 -- # get_header_version patch 00:06:55.046 07:16:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:55.046 07:16:38 version -- app/version.sh@14 -- # cut -f2 00:06:55.046 07:16:38 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.046 
07:16:39 version -- app/version.sh@19 -- # patch=0 00:06:55.046 07:16:39 version -- app/version.sh@20 -- # get_header_version suffix 00:06:55.046 07:16:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:55.046 07:16:39 version -- app/version.sh@14 -- # cut -f2 00:06:55.046 07:16:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.046 07:16:39 version -- app/version.sh@20 -- # suffix=-pre 00:06:55.046 07:16:39 version -- app/version.sh@22 -- # version=25.1 00:06:55.046 07:16:39 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:55.046 07:16:39 version -- app/version.sh@28 -- # version=25.1rc0 00:06:55.046 07:16:39 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:55.046 07:16:39 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:55.046 07:16:39 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:55.046 07:16:39 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:55.046 00:06:55.046 real 0m0.281s 00:06:55.046 user 0m0.181s 00:06:55.046 sys 0m0.150s 00:06:55.046 07:16:39 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.046 07:16:39 version -- common/autotest_common.sh@10 -- # set +x 00:06:55.046 ************************************ 00:06:55.046 END TEST version 00:06:55.046 ************************************ 00:06:55.046 07:16:39 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:55.046 07:16:39 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:55.046 07:16:39 -- spdk/autotest.sh@194 -- # uname -s 00:06:55.046 07:16:39 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:06:55.046 07:16:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:55.046 07:16:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:55.046 07:16:39 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:55.046 07:16:39 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:55.046 07:16:39 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:55.046 07:16:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:55.046 07:16:39 -- common/autotest_common.sh@10 -- # set +x 00:06:55.046 07:16:39 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:55.046 07:16:39 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:55.046 07:16:39 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:55.046 07:16:39 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:55.046 07:16:39 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:55.046 07:16:39 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:55.046 07:16:39 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:55.046 07:16:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:55.046 07:16:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.046 07:16:39 -- common/autotest_common.sh@10 -- # set +x 00:06:55.306 ************************************ 00:06:55.306 START TEST nvmf_tcp 00:06:55.306 ************************************ 00:06:55.306 07:16:39 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:55.306 * Looking for test storage... 
00:06:55.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:55.306 07:16:39 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:55.306 07:16:39 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:55.306 07:16:39 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.306 07:16:39 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.306 07:16:39 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:55.306 07:16:39 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.306 07:16:39 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:55.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.306 --rc genhtml_branch_coverage=1 00:06:55.306 --rc genhtml_function_coverage=1 00:06:55.306 --rc genhtml_legend=1 00:06:55.306 --rc geninfo_all_blocks=1 00:06:55.306 --rc geninfo_unexecuted_blocks=1 00:06:55.306 00:06:55.306 ' 00:06:55.306 07:16:39 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:55.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.306 --rc genhtml_branch_coverage=1 00:06:55.306 --rc genhtml_function_coverage=1 00:06:55.306 --rc genhtml_legend=1 00:06:55.306 --rc geninfo_all_blocks=1 00:06:55.306 --rc geninfo_unexecuted_blocks=1 00:06:55.306 00:06:55.306 ' 00:06:55.306 07:16:39 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:55.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.306 --rc genhtml_branch_coverage=1 00:06:55.306 --rc genhtml_function_coverage=1 00:06:55.306 --rc genhtml_legend=1 00:06:55.306 --rc geninfo_all_blocks=1 00:06:55.306 --rc geninfo_unexecuted_blocks=1 00:06:55.306 00:06:55.306 ' 00:06:55.306 07:16:39 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:55.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.306 --rc genhtml_branch_coverage=1 00:06:55.306 --rc genhtml_function_coverage=1 00:06:55.306 --rc genhtml_legend=1 00:06:55.306 --rc geninfo_all_blocks=1 00:06:55.306 --rc geninfo_unexecuted_blocks=1 00:06:55.306 00:06:55.306 ' 00:06:55.306 07:16:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:55.306 07:16:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:55.307 07:16:39 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:55.307 07:16:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:55.307 07:16:39 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.307 07:16:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:55.307 ************************************ 00:06:55.307 START TEST nvmf_target_core 00:06:55.307 ************************************ 00:06:55.307 07:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:55.568 * Looking for test storage... 
00:06:55.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:55.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.568 --rc genhtml_branch_coverage=1 00:06:55.568 --rc genhtml_function_coverage=1 00:06:55.568 --rc genhtml_legend=1 00:06:55.568 --rc geninfo_all_blocks=1 00:06:55.568 --rc geninfo_unexecuted_blocks=1 00:06:55.568 00:06:55.568 ' 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:55.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.568 --rc genhtml_branch_coverage=1 
00:06:55.568 --rc genhtml_function_coverage=1 00:06:55.568 --rc genhtml_legend=1 00:06:55.568 --rc geninfo_all_blocks=1 00:06:55.568 --rc geninfo_unexecuted_blocks=1 00:06:55.568 00:06:55.568 ' 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:55.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.568 --rc genhtml_branch_coverage=1 00:06:55.568 --rc genhtml_function_coverage=1 00:06:55.568 --rc genhtml_legend=1 00:06:55.568 --rc geninfo_all_blocks=1 00:06:55.568 --rc geninfo_unexecuted_blocks=1 00:06:55.568 00:06:55.568 ' 00:06:55.568 07:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:55.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.568 --rc genhtml_branch_coverage=1 00:06:55.568 --rc genhtml_function_coverage=1 00:06:55.568 --rc genhtml_legend=1 00:06:55.569 --rc geninfo_all_blocks=1 00:06:55.569 --rc geninfo_unexecuted_blocks=1 00:06:55.569 00:06:55.569 ' 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:55.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.569 07:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:55.832 ************************************ 00:06:55.832 START TEST nvmf_abort 00:06:55.832 ************************************ 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:55.832 * Looking for test storage... 
00:06:55.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.832 
07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.832 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:55.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.833 --rc genhtml_branch_coverage=1 00:06:55.833 --rc genhtml_function_coverage=1 00:06:55.833 --rc genhtml_legend=1 00:06:55.833 --rc geninfo_all_blocks=1 00:06:55.833 --rc 
geninfo_unexecuted_blocks=1 00:06:55.833 00:06:55.833 ' 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:55.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.833 --rc genhtml_branch_coverage=1 00:06:55.833 --rc genhtml_function_coverage=1 00:06:55.833 --rc genhtml_legend=1 00:06:55.833 --rc geninfo_all_blocks=1 00:06:55.833 --rc geninfo_unexecuted_blocks=1 00:06:55.833 00:06:55.833 ' 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:55.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.833 --rc genhtml_branch_coverage=1 00:06:55.833 --rc genhtml_function_coverage=1 00:06:55.833 --rc genhtml_legend=1 00:06:55.833 --rc geninfo_all_blocks=1 00:06:55.833 --rc geninfo_unexecuted_blocks=1 00:06:55.833 00:06:55.833 ' 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:55.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.833 --rc genhtml_branch_coverage=1 00:06:55.833 --rc genhtml_function_coverage=1 00:06:55.833 --rc genhtml_legend=1 00:06:55.833 --rc geninfo_all_blocks=1 00:06:55.833 --rc geninfo_unexecuted_blocks=1 00:06:55.833 00:06:55.833 ' 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.833 07:16:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:55.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:55.833 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:03.985 07:16:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:03.985 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:03.985 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:03.985 07:16:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.985 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:03.986 Found net devices under 0000:31:00.0: cvl_0_0 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:31:00.1: cvl_0_1' 00:07:03.986 Found net devices under 0000:31:00.1: cvl_0_1 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:03.986 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:04.247 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:04.247 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:04.247 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:04.247 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:04.247 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:04.247 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:04.247 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:04.247 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:04.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:04.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:07:04.247 00:07:04.247 --- 10.0.0.2 ping statistics --- 00:07:04.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.247 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:07:04.247 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:04.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:04.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:07:04.508 00:07:04.508 --- 10.0.0.1 ping statistics --- 00:07:04.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.508 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1877907 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1877907 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1877907 ']' 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.508 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.508 [2024-11-26 07:16:48.492383] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:07:04.508 [2024-11-26 07:16:48.492432] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.508 [2024-11-26 07:16:48.594635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.508 [2024-11-26 07:16:48.637316] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.508 [2024-11-26 07:16:48.637360] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:04.508 [2024-11-26 07:16:48.637368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:04.508 [2024-11-26 07:16:48.637376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:04.508 [2024-11-26 07:16:48.637382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:04.508 [2024-11-26 07:16:48.638968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.769 [2024-11-26 07:16:48.639115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.769 [2024-11-26 07:16:48.639116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.404 [2024-11-26 07:16:49.340664] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.404 Malloc0 00:07:05.404 07:16:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.404 Delay0 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.404 [2024-11-26 07:16:49.423054] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.404 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:05.664 [2024-11-26 07:16:49.591914] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:08.207 Initializing NVMe Controllers 00:07:08.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:08.207 controller IO queue size 128 less than required 00:07:08.207 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:08.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:08.207 Initialization complete. Launching workers. 
00:07:08.207 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 29026 00:07:08.207 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29091, failed to submit 62 00:07:08.207 success 29030, unsuccessful 61, failed 0 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:08.207 rmmod nvme_tcp 00:07:08.207 rmmod nvme_fabrics 00:07:08.207 rmmod nvme_keyring 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:08.207 07:16:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1877907 ']' 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1877907 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1877907 ']' 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1877907 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1877907 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1877907' 00:07:08.207 killing process with pid 1877907 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1877907 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1877907 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@791 -- # iptables-save 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:08.207 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.122 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:10.122 00:07:10.122 real 0m14.331s 00:07:10.122 user 0m14.192s 00:07:10.122 sys 0m7.329s 00:07:10.122 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.122 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.122 ************************************ 00:07:10.122 END TEST nvmf_abort 00:07:10.122 ************************************ 00:07:10.122 07:16:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:10.122 07:16:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:10.122 07:16:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.122 07:16:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:10.122 ************************************ 00:07:10.122 START TEST nvmf_ns_hotplug_stress 00:07:10.122 ************************************ 00:07:10.122 07:16:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:10.122 * Looking for test storage... 00:07:10.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.122 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:10.122 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:10.122 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.384 
07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:10.384 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.385 07:16:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:10.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.385 --rc genhtml_branch_coverage=1 00:07:10.385 --rc genhtml_function_coverage=1 00:07:10.385 --rc genhtml_legend=1 00:07:10.385 --rc geninfo_all_blocks=1 00:07:10.385 --rc geninfo_unexecuted_blocks=1 00:07:10.385 00:07:10.385 ' 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:10.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.385 --rc genhtml_branch_coverage=1 00:07:10.385 --rc genhtml_function_coverage=1 00:07:10.385 --rc genhtml_legend=1 00:07:10.385 --rc geninfo_all_blocks=1 00:07:10.385 --rc geninfo_unexecuted_blocks=1 00:07:10.385 00:07:10.385 ' 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:10.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.385 --rc genhtml_branch_coverage=1 00:07:10.385 --rc genhtml_function_coverage=1 00:07:10.385 --rc genhtml_legend=1 00:07:10.385 --rc geninfo_all_blocks=1 00:07:10.385 --rc geninfo_unexecuted_blocks=1 00:07:10.385 00:07:10.385 ' 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:10.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.385 --rc genhtml_branch_coverage=1 00:07:10.385 --rc genhtml_function_coverage=1 00:07:10.385 --rc genhtml_legend=1 00:07:10.385 --rc geninfo_all_blocks=1 00:07:10.385 --rc geninfo_unexecuted_blocks=1 00:07:10.385 
00:07:10.385 ' 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:10.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:10.385 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:18.594 07:17:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:18.594 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:18.594 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:18.594 07:17:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:18.594 Found net devices under 0000:31:00.0: cvl_0_0 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.594 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:18.595 07:17:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:18.595 Found net devices under 0000:31:00.1: cvl_0_1 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:18.595 07:17:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:18.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:18.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:07:18.595 00:07:18.595 --- 10.0.0.2 ping statistics --- 00:07:18.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.595 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:18.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:18.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:07:18.595 00:07:18.595 --- 10.0.0.1 ping statistics --- 00:07:18.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.595 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:18.595 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:18.856 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:18.856 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:18.856 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:18.856 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:18.856 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1883322 00:07:18.856 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1883322 00:07:18.856 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:18.856 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1883322 ']' 00:07:18.856 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.856 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.856 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:18.856 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:18.856 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:18.856 [2024-11-26 07:17:02.805432] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization...
00:07:18.856 [2024-11-26 07:17:02.805495] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:18.856 [2024-11-26 07:17:02.911921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:18.856 [2024-11-26 07:17:02.962232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:18.856 [2024-11-26 07:17:02.962287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:18.856 [2024-11-26 07:17:02.962296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:18.856 [2024-11-26 07:17:02.962304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:18.856 [2024-11-26 07:17:02.962310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
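With the target up, the rest of this log is `ns_hotplug_stress.sh` repeatedly removing namespace 1 from `nqn.2016-06.io.spdk:cnode1`, re-adding `Delay0`, bumping `null_size`, and resizing `NULL1` while `spdk_nvme_perf` drives I/O. A dry-run sketch of that loop, with the `rpc.py` invocations echoed rather than executed (the three-iteration count is illustrative; the log runs the cycle dozens of times, 1001 through 1034 and beyond):

```shell
# Dry-run of the hotplug-stress cycle: rpc.py is replaced by echo so the
# command sequence is visible without a running SPDK target.
rpc="echo rpc.py"
nqn=nqn.2016-06.io.spdk:cnode1
null_size=1000
for _pass in 1 2 3; do
  $rpc nvmf_subsystem_remove_ns "$nqn" 1      # yank the namespace under I/O
  $rpc nvmf_subsystem_add_ns "$nqn" Delay0    # hot-add it back
  null_size=$((null_size + 1))
  $rpc bdev_null_resize NULL1 "$null_size"    # grow NULL1 by one block
done
echo "final null_size=$null_size"
```

The point of the pattern is that the remove/add/resize churn races against the perf workload's `kill -0 $PERF_PID` liveness checks seen throughout the log.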
00:07:18.856 [2024-11-26 07:17:02.964148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:18.856 [2024-11-26 07:17:02.964314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:18.856 [2024-11-26 07:17:02.964315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:19.799 07:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:19.799 07:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:07:19.799 07:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:19.799 07:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:19.799 07:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:19.799 07:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:19.799 07:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:07:19.799 07:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:07:19.799 [2024-11-26 07:17:03.799868] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:19.799 07:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:20.060 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:20.060 [2024-11-26 07:17:04.153141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:20.060 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:20.320 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:07:20.579 Malloc0
00:07:20.579 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:07:20.579 Delay0
00:07:20.840 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:20.840 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:07:21.101 NULL1
00:07:21.101 07:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:07:21.361 07:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1883883
00:07:21.361 07:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:21.361 07:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:21.361 07:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.361 07:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.623 07:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:21.623 07:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:21.883 true 00:07:21.883 07:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:21.883 07:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.883 07:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.144 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:22.144 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:22.405 true 00:07:22.405 07:17:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:22.405 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.405 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.666 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:22.666 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:22.926 true 00:07:22.926 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:22.926 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.186 07:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.186 07:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:23.186 07:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:23.447 true 00:07:23.447 07:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:23.447 07:17:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.707 07:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.707 07:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:23.707 07:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:23.968 true 00:07:23.968 07:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:23.968 07:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.229 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.229 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:24.229 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:24.490 true 00:07:24.490 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:24.490 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.752 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.014 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:25.014 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:25.014 true 00:07:25.014 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:25.014 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.273 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.532 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:25.532 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:25.532 true 00:07:25.533 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:25.533 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.794 
07:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.056 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:26.056 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:26.056 true 00:07:26.317 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:26.317 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.317 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.578 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:26.578 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:26.839 true 00:07:26.839 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:26.840 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.840 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.100 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:27.100 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:27.361 true 00:07:27.361 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:27.361 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.361 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.622 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:27.622 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:27.882 true 00:07:27.882 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:27.882 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.143 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.143 
07:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:28.143 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:28.404 true 00:07:28.404 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:28.405 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.666 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.666 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:28.666 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:28.927 true 00:07:28.927 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:28.927 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.188 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.188 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:29.188 07:17:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:29.449 true 00:07:29.449 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:29.449 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.710 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.710 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:29.710 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:29.971 true 00:07:29.971 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:29.971 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.232 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.493 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:30.493 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:30.493 true 00:07:30.493 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:30.493 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.753 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.016 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:31.016 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:31.016 true 00:07:31.016 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:31.016 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.277 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.538 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:31.538 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:31.538 true 00:07:31.538 07:17:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:31.538 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.797 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.058 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:32.058 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:32.058 true 00:07:32.058 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:32.058 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.317 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.579 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:32.579 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:32.579 true 00:07:32.840 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:32.840 07:17:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.840 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.102 07:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:33.102 07:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:33.364 true 00:07:33.364 07:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:33.364 07:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.364 07:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.624 07:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:33.624 07:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:33.884 true 00:07:33.884 07:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:33.884 07:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.884 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.146 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:34.146 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:34.407 true 00:07:34.407 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:34.407 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.668 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.668 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:34.668 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:34.928 true 00:07:34.928 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:34.928 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.189 
07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.189 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:35.189 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:35.447 true 00:07:35.447 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:35.447 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.707 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.969 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:35.969 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:35.969 true 00:07:35.969 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:35.969 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.230 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.491 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:36.491 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:36.491 true 00:07:36.491 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:36.491 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.753 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.014 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:37.014 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:37.014 true 00:07:37.274 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:37.274 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.274 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.535 
07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:37.535 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:37.795 true 00:07:37.795 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:37.796 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.796 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.057 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:38.057 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:38.317 true 00:07:38.317 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:38.318 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.579 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.579 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:38.579 07:17:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:38.839 true 00:07:38.839 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:38.839 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.099 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.099 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:39.099 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:39.361 true 00:07:39.361 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:39.361 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.622 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.622 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:39.622 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:39.883 true 00:07:39.883 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:39.883 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.144 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.405 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:40.405 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:40.405 true 00:07:40.405 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:40.405 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.665 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.927 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:40.927 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:40.927 true 00:07:40.927 07:17:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:40.927 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.187 07:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.449 07:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:41.449 07:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:41.449 true 00:07:41.449 07:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:41.449 07:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.711 07:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.972 07:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:41.972 07:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:41.972 true 00:07:42.233 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:42.233 07:17:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.233 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.493 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:42.493 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:42.493 true 00:07:42.755 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:42.755 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.755 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.016 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:43.016 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:43.016 true 00:07:43.277 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:43.277 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.277 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.538 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:43.538 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:43.538 true 00:07:43.799 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:43.799 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.799 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.060 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:44.060 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:44.321 true 00:07:44.321 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:44.321 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.321 
07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.581 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:44.581 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:44.841 true 00:07:44.841 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:44.841 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.841 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.102 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:45.102 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:45.362 true 00:07:45.362 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:45.362 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.362 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.622 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:45.622 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:45.882 true 00:07:45.882 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:45.882 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.882 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.142 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:46.142 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:46.403 true 00:07:46.403 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:46.403 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.664 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.664 
07:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:46.664 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:46.925 true 00:07:46.925 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:46.925 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.186 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.186 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:47.186 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:47.446 true 00:07:47.446 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:47.446 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.707 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.707 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:07:47.707 07:17:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:07:47.968 true 00:07:47.968 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:47.968 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.229 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.491 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:07:48.491 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:07:48.491 true 00:07:48.491 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:48.491 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.752 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.013 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:07:49.013 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:07:49.013 true 00:07:49.013 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:49.013 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.274 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.535 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:07:49.535 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:07:49.535 true 00:07:49.535 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:49.535 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.796 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.056 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:07:50.056 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:07:50.056 true 00:07:50.056 07:17:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:50.056 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.317 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.578 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:07:50.578 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:07:50.578 true 00:07:50.838 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:50.838 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.838 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.098 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:07:51.098 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:07:51.360 true 00:07:51.360 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883 00:07:51.360 07:17:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:51.360 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:51.621 Initializing NVMe Controllers
00:07:51.621 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:51.621 Controller IO queue size 128, less than required.
00:07:51.621 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:51.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:51.621 Initialization complete. Launching workers.
00:07:51.621 ========================================================
00:07:51.621 Latency(us)
00:07:51.621 Device Information                                                       :     IOPS    MiB/s   Average       min       max
00:07:51.621 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30502.63    14.89   4196.27   1373.71  42301.29
00:07:51.621 ========================================================
00:07:51.621 Total                                                                    : 30502.63    14.89   4196.27   1373.71  42301.29
00:07:51.621
00:07:51.621 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:07:51.621 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:07:51.881 true
00:07:51.881 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1883883
00:07:51.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1883883) - No such process
00:07:51.881 07:17:35
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1883883 00:07:51.881 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.881 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:52.152 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:52.152 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:52.152 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:52.152 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.152 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:52.498 null0 00:07:52.498 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.498 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.499 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:52.499 null1 00:07:52.499 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.499 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
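The long run of `@44`-`@50` records earlier in this log is one loop: while the I/O generator (PID 1883883 in this run) is alive, namespace 1 is hot-removed and re-added and the backing null bdev is grown by one unit per iteration, until the generator exits and `kill -0` fails. A minimal sketch of that loop, reconstructed from the xtrace markers rather than taken from the actual `ns_hotplug_stress.sh`, might look like:

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the @44-@50 hotplug loop; the paths, NQN,
# and bdev names come from the log, but the control flow is inferred and
# may differ from the real script.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
perf_pid=$1          # PID of the I/O generator (1883883 in this run)
null_size=1024

while kill -0 "$perf_pid" 2>/dev/null; do        # @44: loop while I/O runs
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1     # @45: hot-remove namespace 1
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0   # @46: re-attach the bdev
    ((++null_size))                              # @49: 1025, 1026, ...
    "$rpc" bdev_null_resize NULL1 "$null_size"   # @50: resize under active I/O
done
wait "$perf_pid" 2>/dev/null                     # @53: reap the generator
```

The point of the loop is that every RPC lands while the initiator is mid-I/O, so the `true` lines after each `bdev_null_resize` confirm the target survived a resize and namespace flip under load.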
00:07:52.499 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:52.829 null2 00:07:52.829 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.829 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.829 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:52.829 null3 00:07:52.829 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.829 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.829 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:53.091 null4 00:07:53.091 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:53.091 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:53.091 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:53.352 null5 00:07:53.352 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:53.352 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:53.352 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:53.352 null6 00:07:53.352 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:53.352 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:53.352 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:53.613 null7 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.613 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:53.614 
07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1890575 1890576 1890578 1890580 1890582 1890584 1890586 1890588 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.614 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:53.875 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.875 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:53.875 
07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.875 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.875 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:53.875 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:53.875 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:53.875 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.136 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.136 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.136 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.136 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.136 07:17:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.136 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.136 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.136 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.137 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.137 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.137 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.137 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.137 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.137 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.137 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.137 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.137 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:54.137 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.137 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.137 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.137 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.137 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.137 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.137 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.137 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.137 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.137 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.399 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.663 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.663 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.663 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.663 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.663 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:07:54.663 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.663 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.663 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.663 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.663 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.663 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.925 07:17:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.925 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.925 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.186 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.186 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.186 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.186 07:17:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.186 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.186 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:55.186 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.186 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.186 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.186 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.186 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.186 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.186 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.186 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.186 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:55.186 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.186 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:55.186 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.187 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.187 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:55.187 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:55.187 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.187 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.187 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:55.187 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.187 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.187 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:55.187 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.187 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.187 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:55.187 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:55.458 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:55.458 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:55.458 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:55.458 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.458 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.458 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:55.458 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:55.458 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.458 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.458 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:55.458 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:55.458 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:55.458 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.458 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.459 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:55.720 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.720 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.720 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:55.720 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:55.720 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.720 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.720 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:55.720 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:55.720 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.720 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.720 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:55.720 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.720 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.720 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:55.720 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.721 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.721 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:55.721 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:55.721 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:55.721 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.721 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.721 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:55.721 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:55.721 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:55.721 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.721 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.721 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:55.982 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:55.982 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:55.982 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.982 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.982 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:55.982 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.982 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.982 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:55.982 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.982 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.982 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:55.982 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.982 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.982 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:55.983 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:55.983 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:55.983 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.983 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.983 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:55.983 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:55.983 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:55.983 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:56.243 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:56.243 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:56.243 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:56.243 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:56.243 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.243 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.243 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:56.244 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:56.244 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.244 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.244 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:56.244 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:56.244 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.244 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.244 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:56.244 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.244 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.244 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:56.244 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.244 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.244 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:56.505 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:56.505 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.505 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.505 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:56.505 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:56.505 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.505 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.505 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:56.505 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.505 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.505 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:56.505 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:56.505 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:56.505 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:56.505 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.505 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.505 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:56.505 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:56.765 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.765 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.765 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:56.765 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.766 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:57.029 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.029 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.029 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:57.029 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:57.029 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:57.029 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.029 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.029 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:57.029 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:57.029 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.029 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.029 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:57.029 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:57.029 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.029 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.029 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:57.029 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:57.029 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.029 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.029 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:57.029 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.029 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.029 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:57.292 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.292 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.292 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:57.292 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:57.292 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:57.292 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.292 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.292 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:57.292 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:57.292 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:57.292 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.292 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.292 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:57.292 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:57.292 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.292 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.553 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.553 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.553 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.553 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.553 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.553 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.553 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.553 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.553 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:57.553 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.553 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.814 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.814 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.814 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:07:57.814 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:07:57.814 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:57.814 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:07:57.814 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:57.814 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:07:57.814 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:57.814 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:57.814 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:07:57.814 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:57.814 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:07:57.814 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:07:57.814 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1883322 ']'
00:07:57.814 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1883322
00:07:57.814 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1883322 ']'
00:07:57.815 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1883322
00:07:57.815 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:07:57.815 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:57.815 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1883322
00:07:57.815 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:57.815 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:07:57.815 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1883322'
killing process with pid 1883322
00:07:57.815 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1883322
00:07:57.815 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1883322
00:07:57.815 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:57.815 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:57.815 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:57.815 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:07:57.815 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:07:57.815 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:07:58.077 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:58.077 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:58.077 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:58.077 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:58.077 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:58.077 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:00.006 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:00.006
00:08:00.006 real 0m49.897s
00:08:00.006 user 3m20.930s
00:08:00.006 sys 0m17.848s
00:08:00.006 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:00.006 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:08:00.006 ************************************
00:08:00.006 END TEST nvmf_ns_hotplug_stress
00:08:00.006 ************************************
00:08:00.006 07:17:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:08:00.006 07:17:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:00.006 07:17:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:00.006 07:17:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:00.006 ************************************
00:08:00.006 START TEST nvmf_delete_subsystem
00:08:00.006 ************************************
00:08:00.006 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:08:00.268 * Looking for test storage...
00:08:00.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:00.268 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:00.268 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version
00:08:00.268 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:00.268 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:00.268 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:08:00.269 07:17:44
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@366 -- # ver2[v]=2 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:00.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.269 --rc genhtml_branch_coverage=1 00:08:00.269 --rc genhtml_function_coverage=1 00:08:00.269 --rc genhtml_legend=1 00:08:00.269 --rc geninfo_all_blocks=1 00:08:00.269 --rc geninfo_unexecuted_blocks=1 00:08:00.269 00:08:00.269 ' 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:00.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.269 --rc genhtml_branch_coverage=1 00:08:00.269 --rc genhtml_function_coverage=1 00:08:00.269 --rc genhtml_legend=1 00:08:00.269 --rc geninfo_all_blocks=1 00:08:00.269 --rc geninfo_unexecuted_blocks=1 00:08:00.269 00:08:00.269 ' 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:00.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.269 --rc genhtml_branch_coverage=1 00:08:00.269 --rc genhtml_function_coverage=1 00:08:00.269 --rc genhtml_legend=1 00:08:00.269 --rc geninfo_all_blocks=1 00:08:00.269 --rc geninfo_unexecuted_blocks=1 00:08:00.269 00:08:00.269 ' 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:00.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.269 --rc genhtml_branch_coverage=1 00:08:00.269 --rc genhtml_function_coverage=1 00:08:00.269 --rc genhtml_legend=1 00:08:00.269 --rc geninfo_all_blocks=1 00:08:00.269 --rc geninfo_unexecuted_blocks=1 00:08:00.269 00:08:00.269 ' 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:00.269 07:17:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.269 07:17:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:00.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:00.269 07:17:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:00.269 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:00.270 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.270 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.270 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.270 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:00.270 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:00.270 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:00.270 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.414 07:17:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.414 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.415 07:17:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:08.415 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:08.415 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:08.415 Found net devices under 0000:31:00.0: cvl_0_0 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:08.415 Found net devices under 0000:31:00.1: cvl_0_1 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.415 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m 
comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:08.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:08:08.677 00:08:08.677 --- 10.0.0.2 ping statistics --- 00:08:08.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.677 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:08:08.677 00:08:08.677 --- 10.0.0.1 ping statistics --- 00:08:08.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.677 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:08.677 07:17:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1896347 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1896347 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1896347 ']' 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.677 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.938 [2024-11-26 07:17:52.838763] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:08:08.938 [2024-11-26 07:17:52.838830] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.938 [2024-11-26 07:17:52.929766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:08.938 [2024-11-26 07:17:52.970257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.938 [2024-11-26 07:17:52.970292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.938 [2024-11-26 07:17:52.970299] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.938 [2024-11-26 07:17:52.970311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.938 [2024-11-26 07:17:52.970317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
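The `waitforlisten 1896347` call above blocks until the freshly launched `nvmf_tgt` process is up and listening on `/var/tmp/spdk.sock`. A minimal sketch of that polling pattern, generalized to waiting for any path to appear (this is an illustration of the idea, not SPDK's actual `waitforlisten` helper, and `wait_for_path` is a hypothetical name):

```shell
# Poll until a path (e.g. an RPC UNIX socket) exists, with a bounded
# number of retries; returns 0 on success, 1 on timeout.
wait_for_path() {
  local path=$1 retries=${2:-100}
  local i=0
  while (( i++ < retries )); do
    # The real helper checks for a listening socket; -e suffices for a sketch.
    [ -e "$path" ] && return 0
    sleep 0.1
  done
  return 1
}
```

SPDK's version additionally issues an RPC over the socket to confirm the target answers, rather than only checking that the socket file exists.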
00:08:08.938 [2024-11-26 07:17:52.971623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.938 [2024-11-26 07:17:52.971626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.509 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.509 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:09.509 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:09.509 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:09.509 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.770 [2024-11-26 07:17:53.677397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.770 [2024-11-26 07:17:53.701570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.770 NULL1 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.770 Delay0 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.770 07:17:53 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1896470 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:09.770 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:09.770 [2024-11-26 07:17:53.808378] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
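The `rpc_cmd` calls traced above configure the target end to end: create the TCP transport, create subsystem `cnode1`, add a listener on 10.0.0.2:4420, back it with a null bdev wrapped in a delay bdev (`Delay0`), and attach that as a namespace. A dry-run sketch of the same sequence follows; the `rpc` stub only echoes what `scripts/rpc.py` would be invoked with, since the real calls require a running `nvmf_tgt` (the stub and its use here are assumptions, the RPC names and arguments are taken directly from the log):

```shell
# Dry-run: echo each RPC instead of sending it to a live target.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512
# Delay0 injects 1 s of latency on every I/O path, so in-flight I/O is
# guaranteed to exist when the subsystem is deleted mid-run.
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay bdev is the point of the test: with `spdk_nvme_perf` driving queue-depth-128 I/O against a deliberately slow namespace, `nvmf_delete_subsystem` is exercised while requests are outstanding, which is what produces the aborted-I/O completions below.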
00:08:11.685 07:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:11.685 07:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.685 07:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.946 Write completed with error (sct=0, sc=8) 00:08:11.946 Write completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 starting I/O failed: -6 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Write completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 starting I/O failed: -6 00:08:11.946 Write completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 starting I/O failed: -6 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Write completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 starting I/O failed: -6 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Write completed with error (sct=0, sc=8) 00:08:11.946 starting I/O failed: -6 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Write completed with error (sct=0, sc=8) 00:08:11.946 starting I/O failed: -6 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Write completed with error 
(sct=0, sc=8) 00:08:11.946 Write completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 starting I/O failed: -6 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 starting I/O failed: -6 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Write completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Write completed with error (sct=0, sc=8) 00:08:11.946 starting I/O failed: -6 00:08:11.946 Write completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Write completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 starting I/O failed: -6 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Write completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 starting I/O failed: -6 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 [2024-11-26 07:17:56.013131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f0e2c0 is same with the state(6) to be set 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Write completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Read completed with error (sct=0, sc=8) 00:08:11.946 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with 
error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 
00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 starting I/O failed: -6 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 starting I/O failed: -6 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 starting I/O failed: -6 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 starting I/O failed: -6 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 starting I/O failed: -6 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 starting I/O failed: -6 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write 
completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 starting I/O failed: -6 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 starting I/O failed: -6 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 starting I/O failed: -6 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 starting I/O failed: -6 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 [2024-11-26 07:17:56.017120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f311400d4b0 is same with the state(6) to be set 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error 
(sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Write completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read completed with error (sct=0, sc=8) 00:08:11.947 Read 
completed with error (sct=0, sc=8) 00:08:12.890 [2024-11-26 07:17:56.988911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f0f5e0 is same with the state(6) to be set 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 [2024-11-26 07:17:57.017196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f0e0e0 is same with the state(6) to be set 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed 
with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 [2024-11-26 07:17:57.017357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f0e4a0 is same with the state(6) to be set 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error 
(sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 [2024-11-26 07:17:57.019267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f311400d020 is same with the state(6) to be set 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 
00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 Write completed with error (sct=0, sc=8) 00:08:12.890 Read completed with error (sct=0, sc=8) 00:08:12.890 [2024-11-26 07:17:57.019487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f311400d7e0 is same with the state(6) to be set 00:08:12.890 Initializing NVMe Controllers 00:08:12.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:12.890 Controller IO queue size 128, less than required. 00:08:12.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:12.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:12.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:12.890 Initialization complete. Launching workers. 
00:08:12.890 ======================================================== 00:08:12.890 Latency(us) 00:08:12.890 Device Information : IOPS MiB/s Average min max 00:08:12.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.33 0.08 887532.16 224.18 1006962.54 00:08:12.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.38 0.08 960980.98 277.20 2002798.37 00:08:12.891 ======================================================== 00:08:12.891 Total : 333.71 0.16 923050.69 224.18 2002798.37 00:08:12.891 00:08:12.891 [2024-11-26 07:17:57.020115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f0f5e0 (9): Bad file descriptor 00:08:12.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:13.151 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.151 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:13.151 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1896470 00:08:13.151 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:13.412 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:13.412 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1896470 00:08:13.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1896470) - No such process 00:08:13.412 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1896470 00:08:13.412 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:13.412 07:17:57 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1896470 00:08:13.412 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:13.412 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.412 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:13.412 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.412 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1896470 00:08:13.412 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:13.412 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:13.412 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:13.412 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:13.412 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:13.412 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.413 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.674 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.674 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:13.674 
07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.674 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.674 [2024-11-26 07:17:57.553073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.674 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.674 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.674 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.674 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.674 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.674 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1897216 00:08:13.674 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:13.674 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:13.674 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1897216 00:08:13.674 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:13.674 [2024-11-26 07:17:57.629782] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:14.245 07:17:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:14.245 07:17:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1897216 00:08:14.245 07:17:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:14.505 07:17:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:14.505 07:17:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1897216 00:08:14.505 07:17:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:15.077 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.077 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1897216 00:08:15.077 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:15.647 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.647 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1897216 00:08:15.647 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.220 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:16.220 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1897216 00:08:16.220 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.481 07:18:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:16.481 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1897216 00:08:16.481 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.742 Initializing NVMe Controllers 00:08:16.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:16.742 Controller IO queue size 128, less than required. 00:08:16.742 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:16.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:16.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:16.742 Initialization complete. Launching workers. 00:08:16.742 ======================================================== 00:08:16.742 Latency(us) 00:08:16.742 Device Information : IOPS MiB/s Average min max 00:08:16.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001975.15 1000168.41 1005574.02 00:08:16.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002943.18 1000158.98 1009522.22 00:08:16.742 ======================================================== 00:08:16.742 Total : 256.00 0.12 1002459.17 1000158.98 1009522.22 00:08:16.742 00:08:17.003 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:17.003 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1897216 00:08:17.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1897216) - No such process 00:08:17.003 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # 
wait 1897216 00:08:17.003 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:17.003 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:17.003 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:17.003 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:17.003 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:17.003 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:17.003 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:17.003 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:17.003 rmmod nvme_tcp 00:08:17.264 rmmod nvme_fabrics 00:08:17.264 rmmod nvme_keyring 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1896347 ']' 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1896347 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1896347 ']' 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1896347 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:17.264 07:18:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1896347 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1896347' 00:08:17.264 killing process with pid 1896347 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1896347 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1896347 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.264 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:19.814 00:08:19.814 real 0m19.353s 00:08:19.814 user 0m31.220s 00:08:19.814 sys 0m7.480s 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.814 ************************************ 00:08:19.814 END TEST nvmf_delete_subsystem 00:08:19.814 ************************************ 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.814 ************************************ 00:08:19.814 START TEST nvmf_host_management 00:08:19.814 ************************************ 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:19.814 * Looking for test storage... 
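The delete_subsystem test above waits for the spdk_nvme_perf job (pid 1897216) by polling `kill -0` every half second, bounded at 20 iterations, before tearing the target down. A minimal standalone sketch of that polling pattern — the 0.5 s sleep and 20-iteration bound mirror the log, while a background `sleep` stands in for the real perf process:

```shell
#!/bin/sh
# Stand-in for spdk_nvme_perf: a short-lived background job.
sleep 1 &
perf_pid=$!

delay=0
# kill -0 delivers no signal; it only tests whether the pid is still alive.
while kill -0 "$perf_pid" 2>/dev/null; do
    delay=$((delay + 1))
    if [ "$delay" -gt 20 ]; then
        echo "timed out waiting for $perf_pid" >&2
        exit 1
    fi
    sleep 0.5
done

# Reap the job; 'wait' on a finished child returns its exit status.
wait "$perf_pid"
echo "process $perf_pid exited"
```

Once the pid is gone, `kill -0` fails with "No such process", which is exactly the message the log shows before the script falls through to `wait` and the EXIT-trap cleanup.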
00:08:19.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:19.814 07:18:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.814 07:18:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:19.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.814 --rc genhtml_branch_coverage=1 00:08:19.814 --rc genhtml_function_coverage=1 00:08:19.814 --rc genhtml_legend=1 00:08:19.814 --rc geninfo_all_blocks=1 00:08:19.814 --rc geninfo_unexecuted_blocks=1 00:08:19.814 00:08:19.814 ' 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:19.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.814 --rc genhtml_branch_coverage=1 00:08:19.814 --rc genhtml_function_coverage=1 00:08:19.814 --rc genhtml_legend=1 00:08:19.814 --rc geninfo_all_blocks=1 00:08:19.814 --rc geninfo_unexecuted_blocks=1 00:08:19.814 00:08:19.814 ' 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:19.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.814 --rc genhtml_branch_coverage=1 00:08:19.814 --rc genhtml_function_coverage=1 00:08:19.814 --rc genhtml_legend=1 00:08:19.814 --rc geninfo_all_blocks=1 00:08:19.814 --rc geninfo_unexecuted_blocks=1 00:08:19.814 00:08:19.814 ' 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:19.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.814 --rc genhtml_branch_coverage=1 00:08:19.814 --rc genhtml_function_coverage=1 00:08:19.814 --rc genhtml_legend=1 00:08:19.814 --rc geninfo_all_blocks=1 00:08:19.814 --rc geninfo_unexecuted_blocks=1 00:08:19.814 00:08:19.814 ' 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
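Before the host_management test proper, the log shows scripts/common.sh comparing the installed lcov version field by field (`lt 1.15 2` via `cmp_versions ... '<' ...`, splitting on `.` and `-`) to decide which LCOV_OPTS to export. A hedged sketch of that dotted-version "less than" check — `version_lt` is an illustrative name, not the exact helper:

```shell
# Compare two dotted versions field by field; succeed (exit 0) iff $1 < $2.
# Missing fields are treated as 0, as in scripts/common.sh's cmp_versions.
version_lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local len=${#ver1[@]} i a b
    [ "${#ver2[@]}" -gt "$len" ] && len=${#ver2[@]}
    for ((i = 0; i < len; i++)); do
        a=${ver1[i]:-0} b=${ver2[i]:-0}
        if ((a < b)); then return 0; fi
        if ((a > b)); then return 1; fi
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"    # the case decided in the log
version_lt 2.1 2  || echo "2.1 >= 2"
```

Note the comparison is numeric per field, so `1.15 < 2` holds even though `"1.15" > "2"` would be false as a plain string comparison.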
00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.814 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:19.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:19.815 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:27.958 07:18:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.958 07:18:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:27.958 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:27.958 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:27.958 07:18:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:27.958 Found net devices under 0000:31:00.0: cvl_0_0 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:27.958 Found net devices under 0000:31:00.1: cvl_0_1 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:27.958 07:18:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.958 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:27.959 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.959 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.959 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.959 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:27.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:08:27.959 00:08:27.959 --- 10.0.0.2 ping statistics --- 00:08:27.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.959 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:27.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:08:27.959 00:08:27.959 --- 10.0.0.1 ping statistics --- 00:08:27.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.959 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
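The nvmf_tcp_init sequence traced above (move the target-side port into its own network namespace, address both ends, open TCP/4420, then cross-ping) can be sketched as one function. This is a minimal sketch, assuming the interface names (cvl_0_0/cvl_0_1) and 10.0.0.0/24 addresses substituted in this particular run; it needs root and real NICs, so it is only defined here, not invoked.

```shell
# Sketch of the netns setup performed by nvmf/common.sh's nvmf_tcp_init
# in the log above. Interface names and addresses are the ones this run
# happened to use; requires root, so the function is defined but not called.
nvmf_ns_setup() {
  local ns=cvl_0_0_ns_spdk
  ip netns add "$ns"
  ip link set cvl_0_0 netns "$ns"        # target port lives inside the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1    # initiator side stays in the root ns
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$ns" ip link set cvl_0_0 up
  ip netns exec "$ns" ip link set lo up
  # allow NVMe/TCP traffic in, as the ipts wrapper does above
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify reachability in both directions, as the log's ping checks do
  ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```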
00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1903400 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1903400 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1903400 ']' 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.959 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:28.220 [2024-11-26 07:18:12.124596] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:08:28.220 [2024-11-26 07:18:12.124668] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.220 [2024-11-26 07:18:12.234288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.220 [2024-11-26 07:18:12.279200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.220 [2024-11-26 07:18:12.279250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.220 [2024-11-26 07:18:12.279259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.220 [2024-11-26 07:18:12.279267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.220 [2024-11-26 07:18:12.279273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
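The `-m 0x1E` core mask passed to nvmf_tgt above selects cores 1 through 4, which is why the next log entries show reactors starting on exactly those cores. A quick way to decode any such mask (a generic snippet, not part of the harness):

```shell
# Decode an SPDK -m core mask into the CPU cores it selects.
# 0x1E = 0b11110, i.e. bits 1-4 set.
mask=0x1E
cores=()
for ((i = 0; i < 64; i++)); do
  if (( (mask >> i) & 1 )); then
    cores+=("$i")
  fi
done
echo "cores: ${cores[*]}"   # 0x1E -> cores: 1 2 3 4
```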
00:08:28.220 [2024-11-26 07:18:12.281186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.220 [2024-11-26 07:18:12.281349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.220 [2024-11-26 07:18:12.281545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.220 [2024-11-26 07:18:12.281545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:28.830 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.830 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:28.830 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:28.830 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.830 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:28.830 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.830 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:28.830 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.830 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.092 [2024-11-26 07:18:12.965643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.092 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.092 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:29.092 07:18:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:29.092 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.092 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:29.092 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:29.092 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:29.092 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.092 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.092 Malloc0 00:08:29.092 [2024-11-26 07:18:13.042737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1903544 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1903544 /var/tmp/bdevperf.sock 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1903544 ']' 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:29.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:29.092 { 00:08:29.092 "params": { 00:08:29.092 "name": "Nvme$subsystem", 00:08:29.092 "trtype": "$TEST_TRANSPORT", 00:08:29.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:29.092 "adrfam": "ipv4", 00:08:29.092 "trsvcid": "$NVMF_PORT", 00:08:29.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:29.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:29.092 "hdgst": ${hdgst:-false}, 
00:08:29.092 "ddgst": ${ddgst:-false} 00:08:29.092 }, 00:08:29.092 "method": "bdev_nvme_attach_controller" 00:08:29.092 } 00:08:29.092 EOF 00:08:29.092 )") 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:29.092 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:29.092 "params": { 00:08:29.092 "name": "Nvme0", 00:08:29.092 "trtype": "tcp", 00:08:29.092 "traddr": "10.0.0.2", 00:08:29.092 "adrfam": "ipv4", 00:08:29.092 "trsvcid": "4420", 00:08:29.092 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:29.092 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:29.092 "hdgst": false, 00:08:29.092 "ddgst": false 00:08:29.092 }, 00:08:29.092 "method": "bdev_nvme_attach_controller" 00:08:29.092 }' 00:08:29.092 [2024-11-26 07:18:13.149148] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:08:29.092 [2024-11-26 07:18:13.149200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1903544 ] 00:08:29.353 [2024-11-26 07:18:13.227465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.353 [2024-11-26 07:18:13.263734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.613 Running I/O for 10 seconds... 
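The JSON that gen_nvmf_target_json prints above (and feeds to bdevperf through `--json /dev/fd/63`) contains one `bdev_nvme_attach_controller` entry per subsystem. Reproduced below as a standalone snippet, with the values this run substituted (10.0.0.2:4420, cnode0/host0), so it can be inspected with jq; any surrounding wrapper the helper adds is not shown in the log and is not guessed at here.

```shell
# The attach-controller entry exactly as printed in the log above,
# validated with jq (the same tool the harness pipes it through).
config='{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}'
method=$(echo "$config" | jq -r .method)
echo "$method"
```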
00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.874 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=716 00:08:29.874 07:18:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 716 -ge 100 ']' 00:08:29.874 07:18:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:29.874 07:18:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:29.874 07:18:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:29.874 07:18:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:29.874 07:18:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.874 07:18:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:30.141 [2024-11-26 07:18:14.005771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9530 is same with the state(6) to be set 00:08:30.141 [2024-11-26 07:18:14.005838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9530 is same with the state(6) to be set 00:08:30.141 [2024-11-26 07:18:14.005846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9530 is 
same with the state(6) to be set 00:08:30.141 [2024-11-26 07:18:14.005853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9530 is same with the state(6) to be set 00:08:30.141 [2024-11-26 07:18:14.005870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9530 is same with the state(6) to be set 00:08:30.141 [2024-11-26 07:18:14.005877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9530 is same with the state(6) to be set 00:08:30.141 [2024-11-26 07:18:14.005884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9530 is same with the state(6) to be set 00:08:30.141 [2024-11-26 07:18:14.005890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9530 is same with the state(6) to be set 00:08:30.141 [2024-11-26 07:18:14.005897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9530 is same with the state(6) to be set 00:08:30.141 [2024-11-26 07:18:14.005904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9530 is same with the state(6) to be set 00:08:30.141 [2024-11-26 07:18:14.005911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9530 is same with the state(6) to be set 00:08:30.141 [2024-11-26 07:18:14.005917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9530 is same with the state(6) to be set 00:08:30.141 [2024-11-26 07:18:14.005924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9530 is same with the state(6) to be set 00:08:30.141 [2024-11-26 07:18:14.005931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9530 is same with the state(6) to be set 00:08:30.141 [2024-11-26 07:18:14.005938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9530 is same with the state(6) to be 
set 00:08:30.141 [2024-11-26 07:18:14.009910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.141 [2024-11-26 07:18:14.009950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.141 [2024-11-26 07:18:14.009966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.141 [2024-11-26 07:18:14.009975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.141 [2024-11-26 07:18:14.009985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.141 [2024-11-26 07:18:14.009993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.141 [2024-11-26 07:18:14.010002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.141 [2024-11-26 07:18:14.010009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.141 [2024-11-26 07:18:14.010019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.141 [2024-11-26 07:18:14.010026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.141 [2024-11-26 07:18:14.010036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.141 [2024-11-26 07:18:14.010043] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.141 [2024-11-26 07:18:14.010052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.141 [2024-11-26 07:18:14.010059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.141 [2024-11-26 07:18:14.010069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.141 [2024-11-26 07:18:14.010081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.141 [2024-11-26 07:18:14.010090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.141 [2024-11-26 07:18:14.010099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.141 [2024-11-26 07:18:14.010108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.141 [2024-11-26 07:18:14.010115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.141 [2024-11-26 07:18:14.010125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.141 [2024-11-26 07:18:14.010132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.141 [2024-11-26 07:18:14.010141] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.141 [2024-11-26 07:18:14.010149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.141 [2024-11-26 07:18:14.010158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.141 [2024-11-26 07:18:14.010165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.141 [2024-11-26 07:18:14.010174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.141 [2024-11-26 07:18:14.010181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.141 [2024-11-26 07:18:14.010191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.141 [2024-11-26 07:18:14.010198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.141 [2024-11-26 07:18:14.010207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.141 [2024-11-26 07:18:14.010214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.141 [2024-11-26 07:18:14.010224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.141 [2024-11-26 07:18:14.010232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.141 [2024-11-26 07:18:14.010241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.141 [2024-11-26 07:18:14.010249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:30.142 [2024-11-26 07:18:14.010336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 
[2024-11-26 07:18:14.010723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.142 [2024-11-26 07:18:14.010850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.142 [2024-11-26 07:18:14.010860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.143 [2024-11-26 07:18:14.010872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.143 [2024-11-26 07:18:14.010882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.143 [2024-11-26 07:18:14.010889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.143 [2024-11-26 07:18:14.010898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.143 07:18:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.143 [2024-11-26 07:18:14.010914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:08:30.143 [2024-11-26 07:18:14.010924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.143 [2024-11-26 07:18:14.010932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.143 [2024-11-26 07:18:14.010943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.143 [2024-11-26 07:18:14.010952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.143 [2024-11-26 07:18:14.010962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.143 [2024-11-26 07:18:14.010969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.143 [2024-11-26 07:18:14.010979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.143 [2024-11-26 07:18:14.010987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.143 [2024-11-26 07:18:14.010996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.143 [2024-11-26 07:18:14.011004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.143 [2024-11-26 07:18:14.011013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.143 [2024-11-26 
07:18:14.011021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.143 [2024-11-26 07:18:14.011030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.143 [2024-11-26 07:18:14.011038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.143 [2024-11-26 07:18:14.011047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.143 [2024-11-26 07:18:14.011055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.143 [2024-11-26 07:18:14.011140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:30.143 [2024-11-26 07:18:14.011153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.143 [2024-11-26 07:18:14.011161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:30.143 [2024-11-26 07:18:14.011169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.143 [2024-11-26 07:18:14.011177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:30.143 [2024-11-26 07:18:14.011184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.143 [2024-11-26 07:18:14.011192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:30.143 [2024-11-26 07:18:14.011200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.143 [2024-11-26 07:18:14.011208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15dab00 is same with the state(6) to be set 00:08:30.143 07:18:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:30.143 07:18:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.143 07:18:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:30.143 [2024-11-26 07:18:14.012402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:30.143 task offset: 106752 on job bdev=Nvme0n1 fails 00:08:30.143 00:08:30.143 Latency(us) 00:08:30.143 [2024-11-26T06:18:14.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.143 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:30.143 Job: Nvme0n1 ended in about 0.44 seconds with error 00:08:30.143 Verification LBA range: start 0x0 length 0x400 00:08:30.143 Nvme0n1 : 0.44 1818.46 113.65 144.75 0.00 31601.99 1815.89 34078.72 00:08:30.143 [2024-11-26T06:18:14.280Z] =================================================================================================================== 00:08:30.143 [2024-11-26T06:18:14.280Z] Total : 1818.46 113.65 144.75 0.00 31601.99 1815.89 34078.72 00:08:30.143 [2024-11-26 07:18:14.014377] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:30.143 [2024-11-26 07:18:14.014399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15dab00 (9): Bad 
file descriptor 00:08:30.143 [2024-11-26 07:18:14.015511] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:08:30.143 [2024-11-26 07:18:14.015587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:08:30.143 [2024-11-26 07:18:14.015617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.143 [2024-11-26 07:18:14.015632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:08:30.143 [2024-11-26 07:18:14.015641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:08:30.143 [2024-11-26 07:18:14.015648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:08:30.143 [2024-11-26 07:18:14.015655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15dab00 00:08:30.143 [2024-11-26 07:18:14.015676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15dab00 (9): Bad file descriptor 00:08:30.143 [2024-11-26 07:18:14.015690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:08:30.143 [2024-11-26 07:18:14.015698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:08:30.143 [2024-11-26 07:18:14.015707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:08:30.143 [2024-11-26 07:18:14.015715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:08:30.143 07:18:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.143 07:18:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:31.088 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1903544 00:08:31.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1903544) - No such process 00:08:31.088 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:31.088 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:31.088 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:31.088 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:31.088 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:31.088 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:31.088 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:31.088 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:31.088 { 00:08:31.088 "params": { 00:08:31.088 "name": "Nvme$subsystem", 00:08:31.088 "trtype": "$TEST_TRANSPORT", 00:08:31.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.088 "adrfam": "ipv4", 00:08:31.088 "trsvcid": "$NVMF_PORT", 00:08:31.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.088 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:31.088 "hdgst": ${hdgst:-false}, 00:08:31.088 "ddgst": ${ddgst:-false} 00:08:31.088 }, 00:08:31.088 "method": "bdev_nvme_attach_controller" 00:08:31.088 } 00:08:31.088 EOF 00:08:31.088 )") 00:08:31.088 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:31.088 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:31.088 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:31.088 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:31.088 "params": { 00:08:31.088 "name": "Nvme0", 00:08:31.088 "trtype": "tcp", 00:08:31.088 "traddr": "10.0.0.2", 00:08:31.088 "adrfam": "ipv4", 00:08:31.088 "trsvcid": "4420", 00:08:31.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:31.088 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:31.088 "hdgst": false, 00:08:31.088 "ddgst": false 00:08:31.088 }, 00:08:31.088 "method": "bdev_nvme_attach_controller" 00:08:31.088 }' 00:08:31.088 [2024-11-26 07:18:15.082210] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:08:31.088 [2024-11-26 07:18:15.082266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1904018 ] 00:08:31.088 [2024-11-26 07:18:15.160095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.088 [2024-11-26 07:18:15.195671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.348 Running I/O for 1 seconds... 
00:08:32.735 1726.00 IOPS, 107.88 MiB/s 00:08:32.735 Latency(us) 00:08:32.735 [2024-11-26T06:18:16.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.735 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:32.735 Verification LBA range: start 0x0 length 0x400 00:08:32.735 Nvme0n1 : 1.03 1731.69 108.23 0.00 0.00 36293.90 6007.47 34078.72 00:08:32.735 [2024-11-26T06:18:16.872Z] =================================================================================================================== 00:08:32.735 [2024-11-26T06:18:16.872Z] Total : 1731.69 108.23 0.00 0.00 36293.90 6007.47 34078.72 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:32.735 07:18:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:32.735 rmmod nvme_tcp 00:08:32.735 rmmod nvme_fabrics 00:08:32.735 rmmod nvme_keyring 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1903400 ']' 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1903400 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1903400 ']' 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1903400 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1903400 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1903400' 00:08:32.735 killing process with pid 1903400 00:08:32.735 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1903400 00:08:32.735 07:18:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1903400 00:08:32.735 [2024-11-26 07:18:16.860101] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:32.997 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:32.997 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:32.997 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:32.997 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:32.997 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:32.997 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:32.997 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:32.997 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:32.997 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:32.997 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.997 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.997 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.910 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:34.910 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:34.910 00:08:34.910 real 0m15.435s 00:08:34.910 user 0m23.336s 
00:08:34.910 sys 0m7.242s 00:08:34.910 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.910 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.910 ************************************ 00:08:34.910 END TEST nvmf_host_management 00:08:34.910 ************************************ 00:08:34.910 07:18:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:34.910 07:18:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:34.910 07:18:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.910 07:18:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:35.172 ************************************ 00:08:35.172 START TEST nvmf_lvol 00:08:35.172 ************************************ 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:35.172 * Looking for test storage... 
00:08:35.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.172 07:18:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.172 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:35.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.172 --rc genhtml_branch_coverage=1 00:08:35.173 --rc genhtml_function_coverage=1 00:08:35.173 --rc genhtml_legend=1 00:08:35.173 --rc geninfo_all_blocks=1 00:08:35.173 --rc geninfo_unexecuted_blocks=1 
00:08:35.173 00:08:35.173 ' 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:35.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.173 --rc genhtml_branch_coverage=1 00:08:35.173 --rc genhtml_function_coverage=1 00:08:35.173 --rc genhtml_legend=1 00:08:35.173 --rc geninfo_all_blocks=1 00:08:35.173 --rc geninfo_unexecuted_blocks=1 00:08:35.173 00:08:35.173 ' 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:35.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.173 --rc genhtml_branch_coverage=1 00:08:35.173 --rc genhtml_function_coverage=1 00:08:35.173 --rc genhtml_legend=1 00:08:35.173 --rc geninfo_all_blocks=1 00:08:35.173 --rc geninfo_unexecuted_blocks=1 00:08:35.173 00:08:35.173 ' 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:35.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.173 --rc genhtml_branch_coverage=1 00:08:35.173 --rc genhtml_function_coverage=1 00:08:35.173 --rc genhtml_legend=1 00:08:35.173 --rc geninfo_all_blocks=1 00:08:35.173 --rc geninfo_unexecuted_blocks=1 00:08:35.173 00:08:35.173 ' 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.173 07:18:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:35.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.173 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.174 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.174 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:35.174 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:35.174 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:35.174 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
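The trace above populates allowlists of supported NIC PCI IDs (Intel `e810`/`x722`, Mellanox `mlx`) before matching the devices found on the bus. As an illustrative sketch only (the function name `nic_family` is hypothetical; the real `nvmf/common.sh` indexes a `pci_bus_cache` map rather than classifying pairs directly), the vendor:device partitioning amounts to:

```shell
# Hypothetical helper mirroring how the e810/x722/mlx arrays above partition
# supported NICs by "vendor:device". A sketch, not the harness's actual code.
nic_family() {
  case "$1:$2" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 family
    0x8086:0x37d2)               echo x722 ;;    # Intel X722
    0x15b3:*)                    echo mlx  ;;    # Mellanox IDs listed above
    *)                           echo unknown ;;
  esac
}

nic_family 0x8086 0x159b   # the family matched for the 0000:31:00.x devices found below
```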
00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:43.322 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:43.322 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:43.322 
07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:43.322 Found net devices under 0000:31:00.0: cvl_0_0 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.322 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:43.323 07:18:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:43.323 Found net devices under 0000:31:00.1: cvl_0_1 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:43.323 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:43.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:43.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:08:43.585 00:08:43.585 --- 10.0.0.2 ping statistics --- 00:08:43.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.585 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:43.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:43.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:08:43.585 00:08:43.585 --- 10.0.0.1 ping statistics --- 00:08:43.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.585 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1909171 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1909171 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1909171 ']' 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.585 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:43.585 [2024-11-26 07:18:27.670083] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:08:43.585 [2024-11-26 07:18:27.670136] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.846 [2024-11-26 07:18:27.755329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:43.846 [2024-11-26 07:18:27.791768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.846 [2024-11-26 07:18:27.791800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.846 [2024-11-26 07:18:27.791809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.846 [2024-11-26 07:18:27.791816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.846 [2024-11-26 07:18:27.791821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
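The `nvmf_tgt` launch above is prefixed with `ip netns exec cvl_0_0_ns_spdk`, the namespace created during `nvmftestinit`. A minimal sketch of that prefixing pattern from `nvmf/common.sh` (it only assembles the command line, so no root or namespace is needed to run it):

```shell
# Target-side commands are prefixed so they execute inside the namespace
# holding cvl_0_0; NVMF_APP later becomes
#   ("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

# Yields a command line like the nvmf_tgt invocation seen in this run:
echo "${NVMF_TARGET_NS_CMD[@]}" nvmf_tgt -i 0 -e 0xFFFF -m 0x7
```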
00:08:43.846 [2024-11-26 07:18:27.793190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.846 [2024-11-26 07:18:27.793305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.846 [2024-11-26 07:18:27.793308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.418 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.418 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:44.418 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:44.418 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:44.418 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:44.418 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.418 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:44.680 [2024-11-26 07:18:28.658999] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.680 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:44.942 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:44.942 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:45.202 07:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:45.202 07:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:45.202 07:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:45.462 07:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=08a4ef25-1140-4f74-9ace-aa31c5458eea 00:08:45.462 07:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 08a4ef25-1140-4f74-9ace-aa31c5458eea lvol 20 00:08:45.723 07:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=81dc4f31-4b08-4fa8-8992-f66f37bfdf2a 00:08:45.723 07:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:45.724 07:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 81dc4f31-4b08-4fa8-8992-f66f37bfdf2a 00:08:45.985 07:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:46.245 [2024-11-26 07:18:30.191303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.246 07:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:46.507 07:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1909660 00:08:46.507 07:18:30 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:46.507 07:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:47.449 07:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 81dc4f31-4b08-4fa8-8992-f66f37bfdf2a MY_SNAPSHOT 00:08:47.710 07:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=07a645d0-0401-4c2f-91a6-bf94feb7a99a 00:08:47.710 07:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 81dc4f31-4b08-4fa8-8992-f66f37bfdf2a 30 00:08:47.710 07:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 07a645d0-0401-4c2f-91a6-bf94feb7a99a MY_CLONE 00:08:48.032 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a4aa1bf8-898f-4f97-aee2-00154906a834 00:08:48.032 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a4aa1bf8-898f-4f97-aee2-00154906a834 00:08:48.629 07:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1909660 00:08:56.767 Initializing NVMe Controllers 00:08:56.767 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:56.767 Controller IO queue size 128, less than required. 00:08:56.767 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:56.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:56.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:56.767 Initialization complete. Launching workers. 00:08:56.767 ======================================================== 00:08:56.767 Latency(us) 00:08:56.767 Device Information : IOPS MiB/s Average min max 00:08:56.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12307.30 48.08 10402.74 1615.52 48514.65 00:08:56.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 18087.50 70.65 7077.24 362.63 54666.96 00:08:56.767 ======================================================== 00:08:56.767 Total : 30394.80 118.73 8423.78 362.63 54666.96 00:08:56.767 00:08:56.767 07:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:56.767 07:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 81dc4f31-4b08-4fa8-8992-f66f37bfdf2a 00:08:57.027 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 08a4ef25-1140-4f74-9ace-aa31c5458eea 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:57.288 rmmod nvme_tcp 00:08:57.288 rmmod nvme_fabrics 00:08:57.288 rmmod nvme_keyring 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1909171 ']' 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1909171 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1909171 ']' 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1909171 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1909171 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1909171' 00:08:57.288 killing process with pid 1909171 00:08:57.288 07:18:41 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1909171 00:08:57.288 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1909171 00:08:57.549 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:57.549 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:57.549 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:57.549 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:57.549 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:57.549 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:57.549 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:57.549 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:57.549 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:57.549 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.549 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.549 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.463 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:59.463 00:08:59.463 real 0m24.542s 00:08:59.463 user 1m4.392s 00:08:59.463 sys 0m9.038s 00:08:59.463 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.463 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:59.463 ************************************ 00:08:59.463 END TEST 
nvmf_lvol 00:08:59.463 ************************************ 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:59.724 ************************************ 00:08:59.724 START TEST nvmf_lvs_grow 00:08:59.724 ************************************ 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:59.724 * Looking for test storage... 00:08:59.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.724 07:18:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.724 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:59.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.987 --rc genhtml_branch_coverage=1 00:08:59.987 --rc genhtml_function_coverage=1 00:08:59.987 --rc genhtml_legend=1 00:08:59.987 --rc geninfo_all_blocks=1 00:08:59.987 --rc geninfo_unexecuted_blocks=1 00:08:59.987 00:08:59.987 ' 
00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:59.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.987 --rc genhtml_branch_coverage=1 00:08:59.987 --rc genhtml_function_coverage=1 00:08:59.987 --rc genhtml_legend=1 00:08:59.987 --rc geninfo_all_blocks=1 00:08:59.987 --rc geninfo_unexecuted_blocks=1 00:08:59.987 00:08:59.987 ' 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:59.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.987 --rc genhtml_branch_coverage=1 00:08:59.987 --rc genhtml_function_coverage=1 00:08:59.987 --rc genhtml_legend=1 00:08:59.987 --rc geninfo_all_blocks=1 00:08:59.987 --rc geninfo_unexecuted_blocks=1 00:08:59.987 00:08:59.987 ' 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:59.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.987 --rc genhtml_branch_coverage=1 00:08:59.987 --rc genhtml_function_coverage=1 00:08:59.987 --rc genhtml_legend=1 00:08:59.987 --rc geninfo_all_blocks=1 00:08:59.987 --rc geninfo_unexecuted_blocks=1 00:08:59.987 00:08:59.987 ' 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.987 07:18:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.987 
07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.987 07:18:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:59.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.987 
07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:59.987 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:08.130 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:08.130 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.130 
07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.130 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:08.131 Found net devices under 0000:31:00.0: cvl_0_0 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:08.131 Found net devices under 0000:31:00.1: cvl_0_1 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:08.131 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.391 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.391 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.391 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:08.391 07:18:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:08.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:08.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms
00:09:08.391
00:09:08.391 --- 10.0.0.2 ping statistics ---
00:09:08.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:08.391 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms
00:09:08.391 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:08.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:08.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms
00:09:08.391
00:09:08.391 --- 10.0.0.1 ping statistics ---
00:09:08.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:08.391 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms
00:09:08.391 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 --
nvmfappstart -m 0x1 00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1916608 00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1916608 00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1916608 ']' 00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.392 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.392 [2024-11-26 07:18:52.482599] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:09:08.392 [2024-11-26 07:18:52.482669] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.653 [2024-11-26 07:18:52.573750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.653 [2024-11-26 07:18:52.614079] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.653 [2024-11-26 07:18:52.614116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.653 [2024-11-26 07:18:52.614125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.653 [2024-11-26 07:18:52.614131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.653 [2024-11-26 07:18:52.614137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:08.653 [2024-11-26 07:18:52.614735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.224 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.224 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:09.224 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:09.224 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:09.224 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.224 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.224 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:09.485 [2024-11-26 07:18:53.464627] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.485 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:09.485 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.485 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.485 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.485 ************************************ 00:09:09.485 START TEST lvs_grow_clean 00:09:09.485 ************************************ 00:09:09.485 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:09.485 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:09.485 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:09.485 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:09.485 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:09.485 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:09.485 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:09.485 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.485 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.485 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:09.746 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:09.746 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:10.007 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=391e8e9f-0ece-4a8a-ab59-4e570ae189ce 00:09:10.007 07:18:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 391e8e9f-0ece-4a8a-ab59-4e570ae189ce 00:09:10.007 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:10.007 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:10.007 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:10.007 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 391e8e9f-0ece-4a8a-ab59-4e570ae189ce lvol 150 00:09:10.268 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=86d9a949-51dc-46c0-955a-e2bd4cc1e671 00:09:10.268 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:10.268 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:10.268 [2024-11-26 07:18:54.360555] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:10.268 [2024-11-26 07:18:54.360605] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:10.268 true 00:09:10.268 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 391e8e9f-0ece-4a8a-ab59-4e570ae189ce 00:09:10.268 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:10.529 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:10.529 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:10.790 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 86d9a949-51dc-46c0-955a-e2bd4cc1e671 00:09:10.790 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:11.051 [2024-11-26 07:18:55.030598] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.051 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:11.311 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1917317 00:09:11.311 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:11.311 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:11.311 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1917317 /var/tmp/bdevperf.sock 00:09:11.311 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1917317 ']' 00:09:11.311 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:11.311 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.311 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:11.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:11.311 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.311 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:11.311 [2024-11-26 07:18:55.258400] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:09:11.311 [2024-11-26 07:18:55.258451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1917317 ] 00:09:11.311 [2024-11-26 07:18:55.353014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.311 [2024-11-26 07:18:55.388822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.266 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.266 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:12.266 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:12.525 Nvme0n1 00:09:12.525 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:12.525 [ 00:09:12.525 { 00:09:12.525 "name": "Nvme0n1", 00:09:12.525 "aliases": [ 00:09:12.525 "86d9a949-51dc-46c0-955a-e2bd4cc1e671" 00:09:12.525 ], 00:09:12.525 "product_name": "NVMe disk", 00:09:12.525 "block_size": 4096, 00:09:12.526 "num_blocks": 38912, 00:09:12.526 "uuid": "86d9a949-51dc-46c0-955a-e2bd4cc1e671", 00:09:12.526 "numa_id": 0, 00:09:12.526 "assigned_rate_limits": { 00:09:12.526 "rw_ios_per_sec": 0, 00:09:12.526 "rw_mbytes_per_sec": 0, 00:09:12.526 "r_mbytes_per_sec": 0, 00:09:12.526 "w_mbytes_per_sec": 0 00:09:12.526 }, 00:09:12.526 "claimed": false, 00:09:12.526 "zoned": false, 00:09:12.526 "supported_io_types": { 00:09:12.526 "read": true, 
00:09:12.526 "write": true, 00:09:12.526 "unmap": true, 00:09:12.526 "flush": true, 00:09:12.526 "reset": true, 00:09:12.526 "nvme_admin": true, 00:09:12.526 "nvme_io": true, 00:09:12.526 "nvme_io_md": false, 00:09:12.526 "write_zeroes": true, 00:09:12.526 "zcopy": false, 00:09:12.526 "get_zone_info": false, 00:09:12.526 "zone_management": false, 00:09:12.526 "zone_append": false, 00:09:12.526 "compare": true, 00:09:12.526 "compare_and_write": true, 00:09:12.526 "abort": true, 00:09:12.526 "seek_hole": false, 00:09:12.526 "seek_data": false, 00:09:12.526 "copy": true, 00:09:12.526 "nvme_iov_md": false 00:09:12.526 }, 00:09:12.526 "memory_domains": [ 00:09:12.526 { 00:09:12.526 "dma_device_id": "system", 00:09:12.526 "dma_device_type": 1 00:09:12.526 } 00:09:12.526 ], 00:09:12.526 "driver_specific": { 00:09:12.526 "nvme": [ 00:09:12.526 { 00:09:12.526 "trid": { 00:09:12.526 "trtype": "TCP", 00:09:12.526 "adrfam": "IPv4", 00:09:12.526 "traddr": "10.0.0.2", 00:09:12.526 "trsvcid": "4420", 00:09:12.526 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:12.526 }, 00:09:12.526 "ctrlr_data": { 00:09:12.526 "cntlid": 1, 00:09:12.526 "vendor_id": "0x8086", 00:09:12.526 "model_number": "SPDK bdev Controller", 00:09:12.526 "serial_number": "SPDK0", 00:09:12.526 "firmware_revision": "25.01", 00:09:12.526 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:12.526 "oacs": { 00:09:12.526 "security": 0, 00:09:12.526 "format": 0, 00:09:12.526 "firmware": 0, 00:09:12.526 "ns_manage": 0 00:09:12.526 }, 00:09:12.526 "multi_ctrlr": true, 00:09:12.526 "ana_reporting": false 00:09:12.526 }, 00:09:12.526 "vs": { 00:09:12.526 "nvme_version": "1.3" 00:09:12.526 }, 00:09:12.526 "ns_data": { 00:09:12.526 "id": 1, 00:09:12.526 "can_share": true 00:09:12.526 } 00:09:12.526 } 00:09:12.526 ], 00:09:12.526 "mp_policy": "active_passive" 00:09:12.526 } 00:09:12.526 } 00:09:12.526 ] 00:09:12.526 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:12.526 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1917655 00:09:12.526 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:12.786 Running I/O for 10 seconds... 00:09:13.729 Latency(us) 00:09:13.729 [2024-11-26T06:18:57.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.729 Nvme0n1 : 1.00 17774.00 69.43 0.00 0.00 0.00 0.00 0.00 00:09:13.729 [2024-11-26T06:18:57.866Z] =================================================================================================================== 00:09:13.729 [2024-11-26T06:18:57.866Z] Total : 17774.00 69.43 0.00 0.00 0.00 0.00 0.00 00:09:13.729 00:09:14.672 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 391e8e9f-0ece-4a8a-ab59-4e570ae189ce 00:09:14.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.672 Nvme0n1 : 2.00 17874.50 69.82 0.00 0.00 0.00 0.00 0.00 00:09:14.672 [2024-11-26T06:18:58.809Z] =================================================================================================================== 00:09:14.672 [2024-11-26T06:18:58.809Z] Total : 17874.50 69.82 0.00 0.00 0.00 0.00 0.00 00:09:14.672 00:09:14.933 true 00:09:14.933 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 391e8e9f-0ece-4a8a-ab59-4e570ae189ce 00:09:14.933 07:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:14.933 07:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:14.933 07:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:14.933 07:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1917655 00:09:15.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.876 Nvme0n1 : 3.00 17931.00 70.04 0.00 0.00 0.00 0.00 0.00 00:09:15.876 [2024-11-26T06:19:00.013Z] =================================================================================================================== 00:09:15.876 [2024-11-26T06:19:00.013Z] Total : 17931.00 70.04 0.00 0.00 0.00 0.00 0.00 00:09:15.876 00:09:16.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.818 Nvme0n1 : 4.00 17973.00 70.21 0.00 0.00 0.00 0.00 0.00 00:09:16.818 [2024-11-26T06:19:00.955Z] =================================================================================================================== 00:09:16.818 [2024-11-26T06:19:00.955Z] Total : 17973.00 70.21 0.00 0.00 0.00 0.00 0.00 00:09:16.818 00:09:17.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.759 Nvme0n1 : 5.00 18025.00 70.41 0.00 0.00 0.00 0.00 0.00 00:09:17.759 [2024-11-26T06:19:01.896Z] =================================================================================================================== 00:09:17.759 [2024-11-26T06:19:01.896Z] Total : 18025.00 70.41 0.00 0.00 0.00 0.00 0.00 00:09:17.760 00:09:18.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.701 Nvme0n1 : 6.00 18036.83 70.46 0.00 0.00 0.00 0.00 0.00 00:09:18.701 [2024-11-26T06:19:02.838Z] =================================================================================================================== 00:09:18.701 
[2024-11-26T06:19:02.838Z] Total : 18036.83 70.46 0.00 0.00 0.00 0.00 0.00 00:09:18.701 00:09:19.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.641 Nvme0n1 : 7.00 18064.57 70.56 0.00 0.00 0.00 0.00 0.00 00:09:19.641 [2024-11-26T06:19:03.778Z] =================================================================================================================== 00:09:19.641 [2024-11-26T06:19:03.778Z] Total : 18064.57 70.56 0.00 0.00 0.00 0.00 0.00 00:09:19.641 00:09:21.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.024 Nvme0n1 : 8.00 18077.75 70.62 0.00 0.00 0.00 0.00 0.00 00:09:21.024 [2024-11-26T06:19:05.161Z] =================================================================================================================== 00:09:21.024 [2024-11-26T06:19:05.161Z] Total : 18077.75 70.62 0.00 0.00 0.00 0.00 0.00 00:09:21.024 00:09:21.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.597 Nvme0n1 : 9.00 18099.56 70.70 0.00 0.00 0.00 0.00 0.00 00:09:21.597 [2024-11-26T06:19:05.734Z] =================================================================================================================== 00:09:21.597 [2024-11-26T06:19:05.734Z] Total : 18099.56 70.70 0.00 0.00 0.00 0.00 0.00 00:09:21.597 00:09:22.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.983 Nvme0n1 : 10.00 18107.30 70.73 0.00 0.00 0.00 0.00 0.00 00:09:22.983 [2024-11-26T06:19:07.120Z] =================================================================================================================== 00:09:22.983 [2024-11-26T06:19:07.120Z] Total : 18107.30 70.73 0.00 0.00 0.00 0.00 0.00 00:09:22.983 00:09:22.983 00:09:22.983 Latency(us) 00:09:22.983 [2024-11-26T06:19:07.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:22.983 Nvme0n1 : 10.00 18111.42 70.75 0.00 0.00 7064.46 4150.61 17803.95 00:09:22.983 [2024-11-26T06:19:07.120Z] =================================================================================================================== 00:09:22.983 [2024-11-26T06:19:07.120Z] Total : 18111.42 70.75 0.00 0.00 7064.46 4150.61 17803.95 00:09:22.983 { 00:09:22.983 "results": [ 00:09:22.983 { 00:09:22.983 "job": "Nvme0n1", 00:09:22.983 "core_mask": "0x2", 00:09:22.983 "workload": "randwrite", 00:09:22.983 "status": "finished", 00:09:22.983 "queue_depth": 128, 00:09:22.983 "io_size": 4096, 00:09:22.983 "runtime": 10.004795, 00:09:22.983 "iops": 18111.415576231197, 00:09:22.983 "mibps": 70.74771709465311, 00:09:22.983 "io_failed": 0, 00:09:22.983 "io_timeout": 0, 00:09:22.983 "avg_latency_us": 7064.458800116998, 00:09:22.983 "min_latency_us": 4150.613333333334, 00:09:22.983 "max_latency_us": 17803.946666666667 00:09:22.983 } 00:09:22.983 ], 00:09:22.983 "core_count": 1 00:09:22.983 } 00:09:22.984 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1917317 00:09:22.984 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1917317 ']' 00:09:22.984 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1917317 00:09:22.984 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:22.984 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.984 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1917317 00:09:22.984 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:22.984 07:19:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:22.984 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1917317' 00:09:22.984 killing process with pid 1917317 00:09:22.984 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1917317 00:09:22.984 Received shutdown signal, test time was about 10.000000 seconds 00:09:22.984 00:09:22.984 Latency(us) 00:09:22.984 [2024-11-26T06:19:07.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.984 [2024-11-26T06:19:07.121Z] =================================================================================================================== 00:09:22.984 [2024-11-26T06:19:07.121Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:22.984 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1917317 00:09:22.984 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:22.984 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:23.244 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 391e8e9f-0ece-4a8a-ab59-4e570ae189ce 00:09:23.244 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:23.505 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:23.505 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:23.505 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:23.505 [2024-11-26 07:19:07.575402] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:23.505 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 391e8e9f-0ece-4a8a-ab59-4e570ae189ce 00:09:23.505 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:23.505 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 391e8e9f-0ece-4a8a-ab59-4e570ae189ce 00:09:23.505 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.505 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.505 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.505 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.505 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.505 
07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.505 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.505 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:23.505 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 391e8e9f-0ece-4a8a-ab59-4e570ae189ce 00:09:23.766 request: 00:09:23.766 { 00:09:23.766 "uuid": "391e8e9f-0ece-4a8a-ab59-4e570ae189ce", 00:09:23.766 "method": "bdev_lvol_get_lvstores", 00:09:23.766 "req_id": 1 00:09:23.766 } 00:09:23.766 Got JSON-RPC error response 00:09:23.766 response: 00:09:23.766 { 00:09:23.766 "code": -19, 00:09:23.766 "message": "No such device" 00:09:23.766 } 00:09:23.766 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:23.766 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:23.766 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:23.766 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:23.766 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:24.026 aio_bdev 00:09:24.026 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 86d9a949-51dc-46c0-955a-e2bd4cc1e671 00:09:24.026 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=86d9a949-51dc-46c0-955a-e2bd4cc1e671 00:09:24.026 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.026 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:24.026 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.026 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.026 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:24.026 07:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 86d9a949-51dc-46c0-955a-e2bd4cc1e671 -t 2000 00:09:24.287 [ 00:09:24.287 { 00:09:24.287 "name": "86d9a949-51dc-46c0-955a-e2bd4cc1e671", 00:09:24.287 "aliases": [ 00:09:24.287 "lvs/lvol" 00:09:24.287 ], 00:09:24.287 "product_name": "Logical Volume", 00:09:24.287 "block_size": 4096, 00:09:24.287 "num_blocks": 38912, 00:09:24.287 "uuid": "86d9a949-51dc-46c0-955a-e2bd4cc1e671", 00:09:24.287 "assigned_rate_limits": { 00:09:24.287 "rw_ios_per_sec": 0, 00:09:24.287 "rw_mbytes_per_sec": 0, 00:09:24.287 "r_mbytes_per_sec": 0, 00:09:24.287 "w_mbytes_per_sec": 0 00:09:24.287 }, 00:09:24.287 "claimed": false, 00:09:24.287 "zoned": false, 00:09:24.287 "supported_io_types": { 00:09:24.287 "read": true, 00:09:24.287 "write": true, 00:09:24.287 "unmap": true, 00:09:24.287 "flush": false, 00:09:24.287 "reset": true, 00:09:24.287 
"nvme_admin": false, 00:09:24.287 "nvme_io": false, 00:09:24.287 "nvme_io_md": false, 00:09:24.287 "write_zeroes": true, 00:09:24.287 "zcopy": false, 00:09:24.287 "get_zone_info": false, 00:09:24.287 "zone_management": false, 00:09:24.287 "zone_append": false, 00:09:24.287 "compare": false, 00:09:24.287 "compare_and_write": false, 00:09:24.287 "abort": false, 00:09:24.287 "seek_hole": true, 00:09:24.287 "seek_data": true, 00:09:24.287 "copy": false, 00:09:24.287 "nvme_iov_md": false 00:09:24.287 }, 00:09:24.287 "driver_specific": { 00:09:24.287 "lvol": { 00:09:24.287 "lvol_store_uuid": "391e8e9f-0ece-4a8a-ab59-4e570ae189ce", 00:09:24.287 "base_bdev": "aio_bdev", 00:09:24.287 "thin_provision": false, 00:09:24.287 "num_allocated_clusters": 38, 00:09:24.287 "snapshot": false, 00:09:24.287 "clone": false, 00:09:24.287 "esnap_clone": false 00:09:24.287 } 00:09:24.287 } 00:09:24.287 } 00:09:24.287 ] 00:09:24.287 07:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:24.287 07:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 391e8e9f-0ece-4a8a-ab59-4e570ae189ce 00:09:24.287 07:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:24.548 07:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:24.548 07:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 391e8e9f-0ece-4a8a-ab59-4e570ae189ce 00:09:24.548 07:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:24.548 07:19:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:24.548 07:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 86d9a949-51dc-46c0-955a-e2bd4cc1e671 00:09:24.809 07:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 391e8e9f-0ece-4a8a-ab59-4e570ae189ce 00:09:25.069 07:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:25.069 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:25.069 00:09:25.069 real 0m15.642s 00:09:25.069 user 0m15.473s 00:09:25.069 sys 0m1.287s 00:09:25.069 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.069 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:25.069 ************************************ 00:09:25.069 END TEST lvs_grow_clean 00:09:25.069 ************************************ 00:09:25.069 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:25.069 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:25.069 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.069 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:25.330 ************************************ 
00:09:25.330 START TEST lvs_grow_dirty 00:09:25.330 ************************************ 00:09:25.330 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:25.330 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:25.330 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:25.330 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:25.330 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:25.330 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:25.330 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:25.330 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:25.330 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:25.330 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:25.330 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:25.330 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:25.591 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=944e006c-38b7-463b-ba1d-bd0fec5f2adf 00:09:25.591 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 944e006c-38b7-463b-ba1d-bd0fec5f2adf 00:09:25.592 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:25.853 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:25.853 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:25.853 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 944e006c-38b7-463b-ba1d-bd0fec5f2adf lvol 150 00:09:25.853 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9b666254-0900-452c-a012-5f15b8402b66 00:09:25.853 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:25.853 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:26.113 [2024-11-26 07:19:10.093035] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:26.114 [2024-11-26 07:19:10.093089] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:26.114 true 00:09:26.114 07:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 944e006c-38b7-463b-ba1d-bd0fec5f2adf 00:09:26.114 07:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:26.374 07:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:26.374 07:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:26.374 07:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9b666254-0900-452c-a012-5f15b8402b66 00:09:26.634 07:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:26.634 [2024-11-26 07:19:10.751037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.894 07:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:26.894 07:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1920415 00:09:26.894 07:19:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:26.894 07:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:26.894 07:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1920415 /var/tmp/bdevperf.sock 00:09:26.894 07:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1920415 ']' 00:09:26.894 07:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:26.894 07:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.894 07:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:26.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:26.894 07:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.894 07:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:26.894 [2024-11-26 07:19:10.979162] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:09:26.894 [2024-11-26 07:19:10.979213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1920415 ] 00:09:27.154 [2024-11-26 07:19:11.074388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.154 [2024-11-26 07:19:11.111093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.725 07:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.725 07:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:27.725 07:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:27.985 Nvme0n1 00:09:27.985 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:28.245 [ 00:09:28.245 { 00:09:28.245 "name": "Nvme0n1", 00:09:28.245 "aliases": [ 00:09:28.245 "9b666254-0900-452c-a012-5f15b8402b66" 00:09:28.245 ], 00:09:28.245 "product_name": "NVMe disk", 00:09:28.245 "block_size": 4096, 00:09:28.245 "num_blocks": 38912, 00:09:28.245 "uuid": "9b666254-0900-452c-a012-5f15b8402b66", 00:09:28.245 "numa_id": 0, 00:09:28.245 "assigned_rate_limits": { 00:09:28.245 "rw_ios_per_sec": 0, 00:09:28.245 "rw_mbytes_per_sec": 0, 00:09:28.245 "r_mbytes_per_sec": 0, 00:09:28.245 "w_mbytes_per_sec": 0 00:09:28.245 }, 00:09:28.245 "claimed": false, 00:09:28.245 "zoned": false, 00:09:28.245 "supported_io_types": { 00:09:28.245 "read": true, 
00:09:28.245 "write": true, 00:09:28.245 "unmap": true, 00:09:28.245 "flush": true, 00:09:28.245 "reset": true, 00:09:28.245 "nvme_admin": true, 00:09:28.245 "nvme_io": true, 00:09:28.245 "nvme_io_md": false, 00:09:28.245 "write_zeroes": true, 00:09:28.245 "zcopy": false, 00:09:28.245 "get_zone_info": false, 00:09:28.245 "zone_management": false, 00:09:28.245 "zone_append": false, 00:09:28.245 "compare": true, 00:09:28.245 "compare_and_write": true, 00:09:28.245 "abort": true, 00:09:28.245 "seek_hole": false, 00:09:28.245 "seek_data": false, 00:09:28.245 "copy": true, 00:09:28.245 "nvme_iov_md": false 00:09:28.245 }, 00:09:28.245 "memory_domains": [ 00:09:28.245 { 00:09:28.245 "dma_device_id": "system", 00:09:28.245 "dma_device_type": 1 00:09:28.245 } 00:09:28.245 ], 00:09:28.245 "driver_specific": { 00:09:28.245 "nvme": [ 00:09:28.245 { 00:09:28.245 "trid": { 00:09:28.245 "trtype": "TCP", 00:09:28.245 "adrfam": "IPv4", 00:09:28.245 "traddr": "10.0.0.2", 00:09:28.245 "trsvcid": "4420", 00:09:28.245 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:28.245 }, 00:09:28.245 "ctrlr_data": { 00:09:28.245 "cntlid": 1, 00:09:28.245 "vendor_id": "0x8086", 00:09:28.245 "model_number": "SPDK bdev Controller", 00:09:28.245 "serial_number": "SPDK0", 00:09:28.245 "firmware_revision": "25.01", 00:09:28.245 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:28.245 "oacs": { 00:09:28.245 "security": 0, 00:09:28.245 "format": 0, 00:09:28.245 "firmware": 0, 00:09:28.245 "ns_manage": 0 00:09:28.245 }, 00:09:28.245 "multi_ctrlr": true, 00:09:28.245 "ana_reporting": false 00:09:28.245 }, 00:09:28.245 "vs": { 00:09:28.245 "nvme_version": "1.3" 00:09:28.245 }, 00:09:28.245 "ns_data": { 00:09:28.245 "id": 1, 00:09:28.245 "can_share": true 00:09:28.245 } 00:09:28.245 } 00:09:28.245 ], 00:09:28.245 "mp_policy": "active_passive" 00:09:28.245 } 00:09:28.245 } 00:09:28.245 ] 00:09:28.245 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1920751 00:09:28.245 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:28.245 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:28.245 Running I/O for 10 seconds... 00:09:29.183 Latency(us) 00:09:29.183 [2024-11-26T06:19:13.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.183 Nvme0n1 : 1.00 17833.00 69.66 0.00 0.00 0.00 0.00 0.00 00:09:29.183 [2024-11-26T06:19:13.320Z] =================================================================================================================== 00:09:29.183 [2024-11-26T06:19:13.320Z] Total : 17833.00 69.66 0.00 0.00 0.00 0.00 0.00 00:09:29.183 00:09:30.123 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 944e006c-38b7-463b-ba1d-bd0fec5f2adf 00:09:30.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.383 Nvme0n1 : 2.00 17933.50 70.05 0.00 0.00 0.00 0.00 0.00 00:09:30.383 [2024-11-26T06:19:14.520Z] =================================================================================================================== 00:09:30.383 [2024-11-26T06:19:14.520Z] Total : 17933.50 70.05 0.00 0.00 0.00 0.00 0.00 00:09:30.383 00:09:30.383 true 00:09:30.383 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 944e006c-38b7-463b-ba1d-bd0fec5f2adf 00:09:30.383 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:30.643 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:30.643 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:30.643 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1920751 00:09:31.213 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.213 Nvme0n1 : 3.00 17996.00 70.30 0.00 0.00 0.00 0.00 0.00 00:09:31.213 [2024-11-26T06:19:15.350Z] =================================================================================================================== 00:09:31.213 [2024-11-26T06:19:15.350Z] Total : 17996.00 70.30 0.00 0.00 0.00 0.00 0.00 00:09:31.213 00:09:32.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.153 Nvme0n1 : 4.00 18027.25 70.42 0.00 0.00 0.00 0.00 0.00 00:09:32.153 [2024-11-26T06:19:16.290Z] =================================================================================================================== 00:09:32.153 [2024-11-26T06:19:16.290Z] Total : 18027.25 70.42 0.00 0.00 0.00 0.00 0.00 00:09:32.153 00:09:33.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.534 Nvme0n1 : 5.00 18068.20 70.58 0.00 0.00 0.00 0.00 0.00 00:09:33.534 [2024-11-26T06:19:17.671Z] =================================================================================================================== 00:09:33.534 [2024-11-26T06:19:17.671Z] Total : 18068.20 70.58 0.00 0.00 0.00 0.00 0.00 00:09:33.534 00:09:34.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.474 Nvme0n1 : 6.00 18086.83 70.65 0.00 0.00 0.00 0.00 0.00 00:09:34.474 [2024-11-26T06:19:18.611Z] =================================================================================================================== 00:09:34.474 
[2024-11-26T06:19:18.611Z] Total : 18086.83 70.65 0.00 0.00 0.00 0.00 0.00 00:09:34.474 00:09:35.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.413 Nvme0n1 : 7.00 18109.43 70.74 0.00 0.00 0.00 0.00 0.00 00:09:35.413 [2024-11-26T06:19:19.550Z] =================================================================================================================== 00:09:35.413 [2024-11-26T06:19:19.550Z] Total : 18109.43 70.74 0.00 0.00 0.00 0.00 0.00 00:09:35.413 00:09:36.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.352 Nvme0n1 : 8.00 18116.75 70.77 0.00 0.00 0.00 0.00 0.00 00:09:36.352 [2024-11-26T06:19:20.489Z] =================================================================================================================== 00:09:36.352 [2024-11-26T06:19:20.489Z] Total : 18116.75 70.77 0.00 0.00 0.00 0.00 0.00 00:09:36.352 00:09:37.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.291 Nvme0n1 : 9.00 18129.11 70.82 0.00 0.00 0.00 0.00 0.00 00:09:37.291 [2024-11-26T06:19:21.428Z] =================================================================================================================== 00:09:37.291 [2024-11-26T06:19:21.428Z] Total : 18129.11 70.82 0.00 0.00 0.00 0.00 0.00 00:09:37.291 00:09:38.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.231 Nvme0n1 : 10.00 18142.40 70.87 0.00 0.00 0.00 0.00 0.00 00:09:38.231 [2024-11-26T06:19:22.368Z] =================================================================================================================== 00:09:38.231 [2024-11-26T06:19:22.368Z] Total : 18142.40 70.87 0.00 0.00 0.00 0.00 0.00 00:09:38.231 00:09:38.231 00:09:38.231 Latency(us) 00:09:38.231 [2024-11-26T06:19:22.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:38.231 Nvme0n1 : 10.00 18140.57 70.86 0.00 0.00 7053.08 4177.92 14090.24 00:09:38.231 [2024-11-26T06:19:22.368Z] =================================================================================================================== 00:09:38.231 [2024-11-26T06:19:22.368Z] Total : 18140.57 70.86 0.00 0.00 7053.08 4177.92 14090.24 00:09:38.231 { 00:09:38.231 "results": [ 00:09:38.231 { 00:09:38.231 "job": "Nvme0n1", 00:09:38.231 "core_mask": "0x2", 00:09:38.231 "workload": "randwrite", 00:09:38.231 "status": "finished", 00:09:38.231 "queue_depth": 128, 00:09:38.231 "io_size": 4096, 00:09:38.231 "runtime": 10.004482, 00:09:38.231 "iops": 18140.569396796356, 00:09:38.231 "mibps": 70.86159920623577, 00:09:38.231 "io_failed": 0, 00:09:38.231 "io_timeout": 0, 00:09:38.231 "avg_latency_us": 7053.0815543445715, 00:09:38.231 "min_latency_us": 4177.92, 00:09:38.231 "max_latency_us": 14090.24 00:09:38.231 } 00:09:38.231 ], 00:09:38.231 "core_count": 1 00:09:38.231 } 00:09:38.231 07:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1920415 00:09:38.231 07:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1920415 ']' 00:09:38.231 07:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1920415 00:09:38.231 07:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:38.231 07:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.231 07:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1920415 00:09:38.491 07:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:38.491 07:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:38.491 07:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1920415' 00:09:38.491 killing process with pid 1920415 00:09:38.491 07:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1920415 00:09:38.491 Received shutdown signal, test time was about 10.000000 seconds 00:09:38.491 00:09:38.491 Latency(us) 00:09:38.491 [2024-11-26T06:19:22.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.491 [2024-11-26T06:19:22.628Z] =================================================================================================================== 00:09:38.491 [2024-11-26T06:19:22.628Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:38.491 07:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1920415 00:09:38.491 07:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:38.751 07:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:38.751 07:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 944e006c-38b7-463b-ba1d-bd0fec5f2adf 00:09:38.751 07:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:39.011 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:39.012 07:19:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:39.012 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1916608 00:09:39.012 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1916608 00:09:39.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1916608 Killed "${NVMF_APP[@]}" "$@" 00:09:39.012 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:39.012 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:39.012 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:39.012 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:39.012 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:39.012 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1922783 00:09:39.012 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1922783 00:09:39.012 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:39.012 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1922783 ']' 00:09:39.012 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.012 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.012 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.012 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.012 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:39.012 [2024-11-26 07:19:23.131014] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:09:39.012 [2024-11-26 07:19:23.131068] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.272 [2024-11-26 07:19:23.217002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.272 [2024-11-26 07:19:23.253655] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.272 [2024-11-26 07:19:23.253689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.272 [2024-11-26 07:19:23.253698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.272 [2024-11-26 07:19:23.253705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.272 [2024-11-26 07:19:23.253710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:39.272 [2024-11-26 07:19:23.254286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.843 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.843 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:39.843 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:39.843 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:39.843 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:39.843 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.843 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:40.102 [2024-11-26 07:19:24.113029] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:40.102 [2024-11-26 07:19:24.113115] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:40.102 [2024-11-26 07:19:24.113146] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:40.102 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:40.102 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9b666254-0900-452c-a012-5f15b8402b66 00:09:40.102 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9b666254-0900-452c-a012-5f15b8402b66 
00:09:40.102 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.102 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:40.102 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.102 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.102 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:40.421 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9b666254-0900-452c-a012-5f15b8402b66 -t 2000 00:09:40.421 [ 00:09:40.421 { 00:09:40.421 "name": "9b666254-0900-452c-a012-5f15b8402b66", 00:09:40.421 "aliases": [ 00:09:40.421 "lvs/lvol" 00:09:40.421 ], 00:09:40.421 "product_name": "Logical Volume", 00:09:40.421 "block_size": 4096, 00:09:40.421 "num_blocks": 38912, 00:09:40.421 "uuid": "9b666254-0900-452c-a012-5f15b8402b66", 00:09:40.421 "assigned_rate_limits": { 00:09:40.421 "rw_ios_per_sec": 0, 00:09:40.421 "rw_mbytes_per_sec": 0, 00:09:40.421 "r_mbytes_per_sec": 0, 00:09:40.421 "w_mbytes_per_sec": 0 00:09:40.421 }, 00:09:40.421 "claimed": false, 00:09:40.421 "zoned": false, 00:09:40.421 "supported_io_types": { 00:09:40.421 "read": true, 00:09:40.421 "write": true, 00:09:40.421 "unmap": true, 00:09:40.421 "flush": false, 00:09:40.421 "reset": true, 00:09:40.421 "nvme_admin": false, 00:09:40.421 "nvme_io": false, 00:09:40.421 "nvme_io_md": false, 00:09:40.421 "write_zeroes": true, 00:09:40.421 "zcopy": false, 00:09:40.421 "get_zone_info": false, 00:09:40.421 "zone_management": false, 00:09:40.421 "zone_append": 
false, 00:09:40.421 "compare": false, 00:09:40.421 "compare_and_write": false, 00:09:40.421 "abort": false, 00:09:40.421 "seek_hole": true, 00:09:40.421 "seek_data": true, 00:09:40.421 "copy": false, 00:09:40.421 "nvme_iov_md": false 00:09:40.421 }, 00:09:40.421 "driver_specific": { 00:09:40.421 "lvol": { 00:09:40.421 "lvol_store_uuid": "944e006c-38b7-463b-ba1d-bd0fec5f2adf", 00:09:40.421 "base_bdev": "aio_bdev", 00:09:40.421 "thin_provision": false, 00:09:40.421 "num_allocated_clusters": 38, 00:09:40.421 "snapshot": false, 00:09:40.421 "clone": false, 00:09:40.421 "esnap_clone": false 00:09:40.421 } 00:09:40.421 } 00:09:40.421 } 00:09:40.421 ] 00:09:40.421 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:40.421 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 944e006c-38b7-463b-ba1d-bd0fec5f2adf 00:09:40.421 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:40.734 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:40.734 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 944e006c-38b7-463b-ba1d-bd0fec5f2adf 00:09:40.734 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:40.734 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:40.734 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:41.052 [2024-11-26 07:19:24.965231] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:41.052 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 944e006c-38b7-463b-ba1d-bd0fec5f2adf 00:09:41.052 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:41.052 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 944e006c-38b7-463b-ba1d-bd0fec5f2adf 00:09:41.052 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.052 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:41.052 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.052 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:41.052 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.052 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:41.052 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.052 07:19:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:41.052 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 944e006c-38b7-463b-ba1d-bd0fec5f2adf 00:09:41.052 request: 00:09:41.052 { 00:09:41.052 "uuid": "944e006c-38b7-463b-ba1d-bd0fec5f2adf", 00:09:41.052 "method": "bdev_lvol_get_lvstores", 00:09:41.052 "req_id": 1 00:09:41.052 } 00:09:41.052 Got JSON-RPC error response 00:09:41.052 response: 00:09:41.052 { 00:09:41.052 "code": -19, 00:09:41.052 "message": "No such device" 00:09:41.052 } 00:09:41.312 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:41.312 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:41.312 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:41.312 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:41.312 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:41.312 aio_bdev 00:09:41.312 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9b666254-0900-452c-a012-5f15b8402b66 00:09:41.312 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9b666254-0900-452c-a012-5f15b8402b66 00:09:41.312 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.312 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:41.312 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.312 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.312 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:41.573 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9b666254-0900-452c-a012-5f15b8402b66 -t 2000 00:09:41.573 [ 00:09:41.573 { 00:09:41.573 "name": "9b666254-0900-452c-a012-5f15b8402b66", 00:09:41.573 "aliases": [ 00:09:41.573 "lvs/lvol" 00:09:41.573 ], 00:09:41.573 "product_name": "Logical Volume", 00:09:41.573 "block_size": 4096, 00:09:41.573 "num_blocks": 38912, 00:09:41.573 "uuid": "9b666254-0900-452c-a012-5f15b8402b66", 00:09:41.573 "assigned_rate_limits": { 00:09:41.573 "rw_ios_per_sec": 0, 00:09:41.573 "rw_mbytes_per_sec": 0, 00:09:41.573 "r_mbytes_per_sec": 0, 00:09:41.573 "w_mbytes_per_sec": 0 00:09:41.573 }, 00:09:41.573 "claimed": false, 00:09:41.573 "zoned": false, 00:09:41.573 "supported_io_types": { 00:09:41.573 "read": true, 00:09:41.573 "write": true, 00:09:41.573 "unmap": true, 00:09:41.573 "flush": false, 00:09:41.573 "reset": true, 00:09:41.573 "nvme_admin": false, 00:09:41.573 "nvme_io": false, 00:09:41.573 "nvme_io_md": false, 00:09:41.573 "write_zeroes": true, 00:09:41.573 "zcopy": false, 00:09:41.573 "get_zone_info": false, 00:09:41.573 "zone_management": false, 00:09:41.573 "zone_append": false, 00:09:41.573 "compare": false, 00:09:41.573 "compare_and_write": false, 
00:09:41.573 "abort": false, 00:09:41.573 "seek_hole": true, 00:09:41.573 "seek_data": true, 00:09:41.573 "copy": false, 00:09:41.573 "nvme_iov_md": false 00:09:41.573 }, 00:09:41.573 "driver_specific": { 00:09:41.573 "lvol": { 00:09:41.573 "lvol_store_uuid": "944e006c-38b7-463b-ba1d-bd0fec5f2adf", 00:09:41.573 "base_bdev": "aio_bdev", 00:09:41.573 "thin_provision": false, 00:09:41.573 "num_allocated_clusters": 38, 00:09:41.573 "snapshot": false, 00:09:41.573 "clone": false, 00:09:41.573 "esnap_clone": false 00:09:41.573 } 00:09:41.573 } 00:09:41.573 } 00:09:41.573 ] 00:09:41.573 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:41.573 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:41.573 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 944e006c-38b7-463b-ba1d-bd0fec5f2adf 00:09:41.833 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:41.833 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 944e006c-38b7-463b-ba1d-bd0fec5f2adf 00:09:41.833 07:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:42.093 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:42.093 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9b666254-0900-452c-a012-5f15b8402b66 00:09:42.093 07:19:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 944e006c-38b7-463b-ba1d-bd0fec5f2adf 00:09:42.353 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:42.614 00:09:42.614 real 0m17.363s 00:09:42.614 user 0m45.432s 00:09:42.614 sys 0m2.899s 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:42.614 ************************************ 00:09:42.614 END TEST lvs_grow_dirty 00:09:42.614 ************************************ 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:42.614 nvmf_trace.0 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:42.614 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.614 rmmod nvme_tcp 00:09:42.614 rmmod nvme_fabrics 00:09:42.874 rmmod nvme_keyring 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1922783 ']' 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1922783 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1922783 ']' 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1922783 
00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1922783 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1922783' 00:09:42.874 killing process with pid 1922783 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1922783 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1922783 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.874 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:45.416 00:09:45.416 real 0m45.374s 00:09:45.416 user 1m7.553s 00:09:45.416 sys 0m11.043s 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.416 ************************************ 00:09:45.416 END TEST nvmf_lvs_grow 00:09:45.416 ************************************ 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:45.416 ************************************ 00:09:45.416 START TEST nvmf_bdev_io_wait 00:09:45.416 ************************************ 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:45.416 * Looking for test storage... 
00:09:45.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:45.416 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:45.417 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.417 --rc genhtml_branch_coverage=1 00:09:45.417 --rc genhtml_function_coverage=1 00:09:45.417 --rc genhtml_legend=1 00:09:45.417 --rc geninfo_all_blocks=1 00:09:45.417 --rc geninfo_unexecuted_blocks=1 00:09:45.417 00:09:45.417 ' 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:45.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.417 --rc genhtml_branch_coverage=1 00:09:45.417 --rc genhtml_function_coverage=1 00:09:45.417 --rc genhtml_legend=1 00:09:45.417 --rc geninfo_all_blocks=1 00:09:45.417 --rc geninfo_unexecuted_blocks=1 00:09:45.417 00:09:45.417 ' 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:45.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.417 --rc genhtml_branch_coverage=1 00:09:45.417 --rc genhtml_function_coverage=1 00:09:45.417 --rc genhtml_legend=1 00:09:45.417 --rc geninfo_all_blocks=1 00:09:45.417 --rc geninfo_unexecuted_blocks=1 00:09:45.417 00:09:45.417 ' 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:45.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.417 --rc genhtml_branch_coverage=1 00:09:45.417 --rc genhtml_function_coverage=1 00:09:45.417 --rc genhtml_legend=1 00:09:45.417 --rc geninfo_all_blocks=1 00:09:45.417 --rc geninfo_unexecuted_blocks=1 00:09:45.417 00:09:45.417 ' 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.417 07:19:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:45.417 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:53.582 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:53.583 07:19:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:53.583 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:53.583 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.583 07:19:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:53.583 Found net devices under 0000:31:00.0: cvl_0_0 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.583 
07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:53.583 Found net devices under 0000:31:00.1: cvl_0_1 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:53.583 07:19:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:53.583 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:53.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:53.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:09:53.844 00:09:53.844 --- 10.0.0.2 ping statistics --- 00:09:53.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.844 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:53.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:53.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:09:53.844 00:09:53.844 --- 10.0.0.1 ping statistics --- 00:09:53.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.844 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1928541 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 1928541 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1928541 ']' 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.844 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.844 [2024-11-26 07:19:37.852154] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:09:53.844 [2024-11-26 07:19:37.852222] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.844 [2024-11-26 07:19:37.944062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.105 [2024-11-26 07:19:37.986901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.105 [2024-11-26 07:19:37.986939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:54.105 [2024-11-26 07:19:37.986946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.105 [2024-11-26 07:19:37.986953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.105 [2024-11-26 07:19:37.986959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.105 [2024-11-26 07:19:37.988786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.105 [2024-11-26 07:19:37.988911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.105 [2024-11-26 07:19:37.989020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.105 [2024-11-26 07:19:37.989021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.676 07:19:38 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.676 [2024-11-26 07:19:38.765337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.676 Malloc0 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.676 
07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.676 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.936 [2024-11-26 07:19:38.824571] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1928606 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1928609 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 
00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:54.936 { 00:09:54.936 "params": { 00:09:54.936 "name": "Nvme$subsystem", 00:09:54.936 "trtype": "$TEST_TRANSPORT", 00:09:54.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:54.936 "adrfam": "ipv4", 00:09:54.936 "trsvcid": "$NVMF_PORT", 00:09:54.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:54.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:54.936 "hdgst": ${hdgst:-false}, 00:09:54.936 "ddgst": ${ddgst:-false} 00:09:54.936 }, 00:09:54.936 "method": "bdev_nvme_attach_controller" 00:09:54.936 } 00:09:54.936 EOF 00:09:54.936 )") 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1928611 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:54.936 { 00:09:54.936 "params": { 00:09:54.936 
"name": "Nvme$subsystem", 00:09:54.936 "trtype": "$TEST_TRANSPORT", 00:09:54.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:54.936 "adrfam": "ipv4", 00:09:54.936 "trsvcid": "$NVMF_PORT", 00:09:54.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:54.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:54.936 "hdgst": ${hdgst:-false}, 00:09:54.936 "ddgst": ${ddgst:-false} 00:09:54.936 }, 00:09:54.936 "method": "bdev_nvme_attach_controller" 00:09:54.936 } 00:09:54.936 EOF 00:09:54.936 )") 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1928615 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:54.936 { 00:09:54.936 "params": { 00:09:54.936 "name": "Nvme$subsystem", 00:09:54.936 "trtype": "$TEST_TRANSPORT", 00:09:54.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:54.936 "adrfam": "ipv4", 00:09:54.936 "trsvcid": "$NVMF_PORT", 00:09:54.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:54.936 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:54.936 "hdgst": ${hdgst:-false}, 00:09:54.936 "ddgst": ${ddgst:-false} 00:09:54.936 }, 00:09:54.936 "method": "bdev_nvme_attach_controller" 00:09:54.936 } 00:09:54.936 EOF 00:09:54.936 )") 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:54.936 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:54.936 { 00:09:54.936 "params": { 00:09:54.936 "name": "Nvme$subsystem", 00:09:54.936 "trtype": "$TEST_TRANSPORT", 00:09:54.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:54.937 "adrfam": "ipv4", 00:09:54.937 "trsvcid": "$NVMF_PORT", 00:09:54.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:54.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:54.937 "hdgst": ${hdgst:-false}, 00:09:54.937 "ddgst": ${ddgst:-false} 00:09:54.937 }, 00:09:54.937 "method": "bdev_nvme_attach_controller" 00:09:54.937 } 00:09:54.937 EOF 00:09:54.937 )") 00:09:54.937 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:54.937 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1928606 00:09:54.937 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:09:54.937 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:54.937 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:54.937 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:54.937 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:54.937 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:54.937 "params": { 00:09:54.937 "name": "Nvme1", 00:09:54.937 "trtype": "tcp", 00:09:54.937 "traddr": "10.0.0.2", 00:09:54.937 "adrfam": "ipv4", 00:09:54.937 "trsvcid": "4420", 00:09:54.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:54.937 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:54.937 "hdgst": false, 00:09:54.937 "ddgst": false 00:09:54.937 }, 00:09:54.937 "method": "bdev_nvme_attach_controller" 00:09:54.937 }' 00:09:54.937 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:54.937 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:54.937 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:54.937 "params": { 00:09:54.937 "name": "Nvme1", 00:09:54.937 "trtype": "tcp", 00:09:54.937 "traddr": "10.0.0.2", 00:09:54.937 "adrfam": "ipv4", 00:09:54.937 "trsvcid": "4420", 00:09:54.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:54.937 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:54.937 "hdgst": false, 00:09:54.937 "ddgst": false 00:09:54.937 }, 00:09:54.937 "method": "bdev_nvme_attach_controller" 00:09:54.937 }' 00:09:54.937 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:54.937 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:54.937 "params": { 00:09:54.937 "name": "Nvme1", 00:09:54.937 "trtype": "tcp", 00:09:54.937 "traddr": "10.0.0.2", 00:09:54.937 "adrfam": "ipv4", 00:09:54.937 "trsvcid": "4420", 00:09:54.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:54.937 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:54.937 "hdgst": false, 00:09:54.937 "ddgst": false 00:09:54.937 }, 00:09:54.937 "method": "bdev_nvme_attach_controller" 00:09:54.937 }' 00:09:54.937 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:54.937 07:19:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:54.937 "params": { 00:09:54.937 "name": "Nvme1", 00:09:54.937 "trtype": "tcp", 00:09:54.937 "traddr": "10.0.0.2", 00:09:54.937 "adrfam": "ipv4", 00:09:54.937 "trsvcid": "4420", 00:09:54.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:54.937 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:54.937 "hdgst": false, 00:09:54.937 "ddgst": false 00:09:54.937 }, 00:09:54.937 "method": "bdev_nvme_attach_controller" 00:09:54.937 }' 00:09:54.937 [2024-11-26 07:19:38.879375] Starting SPDK v25.01-pre git sha1 
8afd1c921 / DPDK 24.03.0 initialization... 00:09:54.937 [2024-11-26 07:19:38.879426] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:54.937 [2024-11-26 07:19:38.880682] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:09:54.937 [2024-11-26 07:19:38.880731] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:54.937 [2024-11-26 07:19:38.883484] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:09:54.937 [2024-11-26 07:19:38.883529] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:54.937 [2024-11-26 07:19:38.885333] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:09:54.937 [2024-11-26 07:19:38.885376] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:54.937 [2024-11-26 07:19:39.050584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.197 [2024-11-26 07:19:39.080473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:55.197 [2024-11-26 07:19:39.098594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.197 [2024-11-26 07:19:39.127008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:55.197 [2024-11-26 07:19:39.145328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.197 [2024-11-26 07:19:39.173717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:55.197 [2024-11-26 07:19:39.194950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.197 [2024-11-26 07:19:39.223130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:55.458 Running I/O for 1 seconds... 00:09:55.458 Running I/O for 1 seconds... 00:09:55.458 Running I/O for 1 seconds... 00:09:55.458 Running I/O for 1 seconds... 
00:09:56.397 8341.00 IOPS, 32.58 MiB/s [2024-11-26T06:19:40.534Z] 12703.00 IOPS, 49.62 MiB/s [2024-11-26T06:19:40.534Z] 173504.00 IOPS, 677.75 MiB/s 00:09:56.397 Latency(us) 00:09:56.397 [2024-11-26T06:19:40.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.397 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:56.397 Nvme1n1 : 1.00 173155.47 676.39 0.00 0.00 735.00 312.32 2020.69 00:09:56.397 [2024-11-26T06:19:40.534Z] =================================================================================================================== 00:09:56.397 [2024-11-26T06:19:40.534Z] Total : 173155.47 676.39 0.00 0.00 735.00 312.32 2020.69 00:09:56.397 00:09:56.397 Latency(us) 00:09:56.397 [2024-11-26T06:19:40.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.397 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:56.397 Nvme1n1 : 1.01 8346.11 32.60 0.00 0.00 15177.87 6335.15 23265.28 00:09:56.397 [2024-11-26T06:19:40.534Z] =================================================================================================================== 00:09:56.397 [2024-11-26T06:19:40.534Z] Total : 8346.11 32.60 0.00 0.00 15177.87 6335.15 23265.28 00:09:56.397 00:09:56.397 Latency(us) 00:09:56.397 [2024-11-26T06:19:40.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.397 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:56.397 Nvme1n1 : 1.01 12749.42 49.80 0.00 0.00 10005.29 4942.51 20753.07 00:09:56.397 [2024-11-26T06:19:40.534Z] =================================================================================================================== 00:09:56.397 [2024-11-26T06:19:40.534Z] Total : 12749.42 49.80 0.00 0.00 10005.29 4942.51 20753.07 00:09:56.397 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1928609 00:09:56.658 07:19:40 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1928611 00:09:56.658 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1928615 00:09:56.658 8782.00 IOPS, 34.30 MiB/s 00:09:56.659 Latency(us) 00:09:56.659 [2024-11-26T06:19:40.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.659 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:56.659 Nvme1n1 : 1.01 8902.02 34.77 0.00 0.00 14343.17 3713.71 35607.89 00:09:56.659 [2024-11-26T06:19:40.796Z] =================================================================================================================== 00:09:56.659 [2024-11-26T06:19:40.796Z] Total : 8902.02 34.77 0.00 0.00 14343.17 3713.71 35607.89 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:56.659 07:19:40 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:56.659 rmmod nvme_tcp 00:09:56.659 rmmod nvme_fabrics 00:09:56.659 rmmod nvme_keyring 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1928541 ']' 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1928541 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1928541 ']' 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1928541 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.659 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1928541 00:09:56.919 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.919 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.919 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1928541' 00:09:56.919 killing process with pid 1928541 00:09:56.919 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@973 -- # kill 1928541 00:09:56.919 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1928541 00:09:56.919 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:56.919 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:56.919 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:56.919 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:56.919 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:56.919 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:56.919 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:56.919 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:56.919 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:56.919 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.919 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.919 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.465 07:19:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:59.465 00:09:59.465 real 0m13.863s 00:09:59.465 user 0m19.553s 00:09:59.465 sys 0m7.804s 00:09:59.465 07:19:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.465 07:19:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:09:59.465 ************************************ 00:09:59.465 END TEST nvmf_bdev_io_wait 00:09:59.465 ************************************ 00:09:59.465 07:19:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:59.465 07:19:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:59.465 07:19:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.465 07:19:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:59.465 ************************************ 00:09:59.465 START TEST nvmf_queue_depth 00:09:59.465 ************************************ 00:09:59.465 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:59.465 * Looking for test storage... 
00:09:59.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:59.465 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:59.465 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:59.466 
07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:59.466 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:59.466 --rc genhtml_branch_coverage=1 00:09:59.466 --rc genhtml_function_coverage=1 00:09:59.466 --rc genhtml_legend=1 00:09:59.466 --rc geninfo_all_blocks=1 00:09:59.466 --rc geninfo_unexecuted_blocks=1 00:09:59.466 00:09:59.466 ' 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:59.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.466 --rc genhtml_branch_coverage=1 00:09:59.466 --rc genhtml_function_coverage=1 00:09:59.466 --rc genhtml_legend=1 00:09:59.466 --rc geninfo_all_blocks=1 00:09:59.466 --rc geninfo_unexecuted_blocks=1 00:09:59.466 00:09:59.466 ' 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:59.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.466 --rc genhtml_branch_coverage=1 00:09:59.466 --rc genhtml_function_coverage=1 00:09:59.466 --rc genhtml_legend=1 00:09:59.466 --rc geninfo_all_blocks=1 00:09:59.466 --rc geninfo_unexecuted_blocks=1 00:09:59.466 00:09:59.466 ' 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:59.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.466 --rc genhtml_branch_coverage=1 00:09:59.466 --rc genhtml_function_coverage=1 00:09:59.466 --rc genhtml_legend=1 00:09:59.466 --rc geninfo_all_blocks=1 00:09:59.466 --rc geninfo_unexecuted_blocks=1 00:09:59.466 00:09:59.466 ' 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.466 07:19:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.466 07:19:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.466 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.467 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.467 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.467 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.467 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:59.467 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:59.467 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:59.467 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:59.467 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:59.467 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.467 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:59.467 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:59.467 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:59.467 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.467 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.467 07:19:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.467 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:59.467 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:59.467 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:59.467 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:07.598 07:19:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:07.598 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:07.599 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:07.599 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:07.599 Found net devices under 0000:31:00.0: cvl_0_0 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:07.599 Found net devices under 0000:31:00.1: cvl_0_1 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.599 
07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:07.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:10:07.599 00:10:07.599 --- 10.0.0.2 ping statistics --- 00:10:07.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.599 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:07.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:10:07.599 00:10:07.599 --- 10.0.0.1 ping statistics --- 00:10:07.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.599 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1933948 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
1933948 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1933948 ']' 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.599 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.599 [2024-11-26 07:19:51.694842] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:10:07.599 [2024-11-26 07:19:51.694916] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.860 [2024-11-26 07:19:51.809599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.860 [2024-11-26 07:19:51.860626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.860 [2024-11-26 07:19:51.860687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:07.860 [2024-11-26 07:19:51.860696] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.860 [2024-11-26 07:19:51.860703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.860 [2024-11-26 07:19:51.860709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.860 [2024-11-26 07:19:51.861554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.431 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.431 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:08.431 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:08.431 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:08.431 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.431 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.431 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:08.431 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.431 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.431 [2024-11-26 07:19:52.558460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.692 Malloc0 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.692 [2024-11-26 07:19:52.602308] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.692 07:19:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1933996 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1933996 /var/tmp/bdevperf.sock 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1933996 ']' 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:08.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.692 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.692 [2024-11-26 07:19:52.661490] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:10:08.692 [2024-11-26 07:19:52.661561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1933996 ] 00:10:08.692 [2024-11-26 07:19:52.748831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.692 [2024-11-26 07:19:52.790757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.635 07:19:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.635 07:19:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:09.635 07:19:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:09.635 07:19:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.635 07:19:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.635 NVMe0n1 00:10:09.635 07:19:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.635 07:19:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:09.635 Running I/O for 10 seconds... 
00:10:11.965 8943.00 IOPS, 34.93 MiB/s [2024-11-26T06:19:57.043Z] 9207.50 IOPS, 35.97 MiB/s [2024-11-26T06:19:57.984Z] 9895.33 IOPS, 38.65 MiB/s [2024-11-26T06:19:58.926Z] 10313.75 IOPS, 40.29 MiB/s [2024-11-26T06:19:59.869Z] 10631.60 IOPS, 41.53 MiB/s [2024-11-26T06:20:00.811Z] 10750.83 IOPS, 42.00 MiB/s [2024-11-26T06:20:01.752Z] 10883.43 IOPS, 42.51 MiB/s [2024-11-26T06:20:03.135Z] 10990.75 IOPS, 42.93 MiB/s [2024-11-26T06:20:03.706Z] 11036.11 IOPS, 43.11 MiB/s [2024-11-26T06:20:03.967Z] 11095.20 IOPS, 43.34 MiB/s 00:10:19.830 Latency(us) 00:10:19.830 [2024-11-26T06:20:03.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:19.830 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:19.830 Verification LBA range: start 0x0 length 0x4000 00:10:19.830 NVMe0n1 : 10.05 11128.78 43.47 0.00 0.00 91665.42 6417.07 72089.60 00:10:19.830 [2024-11-26T06:20:03.967Z] =================================================================================================================== 00:10:19.830 [2024-11-26T06:20:03.967Z] Total : 11128.78 43.47 0.00 0.00 91665.42 6417.07 72089.60 00:10:19.830 { 00:10:19.830 "results": [ 00:10:19.830 { 00:10:19.830 "job": "NVMe0n1", 00:10:19.830 "core_mask": "0x1", 00:10:19.830 "workload": "verify", 00:10:19.830 "status": "finished", 00:10:19.830 "verify_range": { 00:10:19.830 "start": 0, 00:10:19.830 "length": 16384 00:10:19.830 }, 00:10:19.830 "queue_depth": 1024, 00:10:19.830 "io_size": 4096, 00:10:19.830 "runtime": 10.047912, 00:10:19.830 "iops": 11128.779790268864, 00:10:19.830 "mibps": 43.47179605573775, 00:10:19.830 "io_failed": 0, 00:10:19.830 "io_timeout": 0, 00:10:19.830 "avg_latency_us": 91665.41907751377, 00:10:19.830 "min_latency_us": 6417.066666666667, 00:10:19.830 "max_latency_us": 72089.6 00:10:19.830 } 00:10:19.830 ], 00:10:19.830 "core_count": 1 00:10:19.830 } 00:10:19.830 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 
1933996 00:10:19.830 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1933996 ']' 00:10:19.830 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1933996 00:10:19.830 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:19.830 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.830 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1933996 00:10:19.830 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.830 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.830 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1933996' 00:10:19.830 killing process with pid 1933996 00:10:19.830 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1933996 00:10:19.830 Received shutdown signal, test time was about 10.000000 seconds 00:10:19.830 00:10:19.830 Latency(us) 00:10:19.830 [2024-11-26T06:20:03.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:19.830 [2024-11-26T06:20:03.967Z] =================================================================================================================== 00:10:19.830 [2024-11-26T06:20:03.967Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:19.830 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1933996 00:10:19.830 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:19.830 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:10:19.830 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:19.830 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:19.830 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:19.830 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:19.830 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:19.831 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:19.831 rmmod nvme_tcp 00:10:20.092 rmmod nvme_fabrics 00:10:20.092 rmmod nvme_keyring 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1933948 ']' 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1933948 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1933948 ']' 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1933948 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1933948 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1933948' 00:10:20.092 killing process with pid 1933948 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1933948 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1933948 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:20.092 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:20.353 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:20.353 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:20.353 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.353 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.353 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.266 07:20:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:22.266 00:10:22.266 real 0m23.237s 00:10:22.266 user 0m25.859s 00:10:22.266 sys 0m7.604s 00:10:22.266 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.266 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:22.266 ************************************ 00:10:22.266 END TEST nvmf_queue_depth 00:10:22.266 ************************************ 00:10:22.266 07:20:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:22.266 07:20:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:22.266 07:20:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.266 07:20:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.266 ************************************ 00:10:22.266 START TEST nvmf_target_multipath 00:10:22.266 ************************************ 00:10:22.266 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:22.528 * Looking for test storage... 
00:10:22.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:22.528 07:20:06 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:22.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.528 --rc genhtml_branch_coverage=1 00:10:22.528 --rc genhtml_function_coverage=1 00:10:22.528 --rc genhtml_legend=1 00:10:22.528 --rc geninfo_all_blocks=1 00:10:22.528 --rc geninfo_unexecuted_blocks=1 00:10:22.528 00:10:22.528 ' 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:22.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.528 --rc genhtml_branch_coverage=1 00:10:22.528 --rc genhtml_function_coverage=1 00:10:22.528 --rc genhtml_legend=1 00:10:22.528 --rc geninfo_all_blocks=1 00:10:22.528 --rc geninfo_unexecuted_blocks=1 00:10:22.528 00:10:22.528 ' 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:22.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.528 --rc genhtml_branch_coverage=1 00:10:22.528 --rc genhtml_function_coverage=1 00:10:22.528 --rc genhtml_legend=1 00:10:22.528 --rc geninfo_all_blocks=1 00:10:22.528 --rc geninfo_unexecuted_blocks=1 00:10:22.528 00:10:22.528 ' 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:22.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.528 --rc genhtml_branch_coverage=1 00:10:22.528 --rc genhtml_function_coverage=1 00:10:22.528 --rc genhtml_legend=1 00:10:22.528 --rc geninfo_all_blocks=1 00:10:22.528 --rc geninfo_unexecuted_blocks=1 00:10:22.528 00:10:22.528 ' 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.528 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:22.529 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:30.675 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:30.675 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:30.676 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:30.676 Found net devices under 0000:31:00.0: cvl_0_0 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.676 07:20:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:30.676 Found net devices under 0000:31:00.1: cvl_0_1 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:30.676 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:30.938 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:30.938 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:30.938 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:30.938 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:30.938 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:31.199 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:31.199 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:31.199 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:31.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:31.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:10:31.199 00:10:31.199 --- 10.0.0.2 ping statistics --- 00:10:31.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.199 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:10:31.199 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:31.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:31.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:10:31.199 00:10:31.199 --- 10.0.0.1 ping statistics --- 00:10:31.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.199 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:31.200 only one NIC for nvmf test 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:31.200 07:20:15 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:31.200 rmmod nvme_tcp 00:10:31.200 rmmod nvme_fabrics 00:10:31.200 rmmod nvme_keyring 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.200 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.747 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:33.748 00:10:33.748 real 0m10.989s 00:10:33.748 user 0m2.362s 00:10:33.748 sys 0m6.545s 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:33.748 ************************************ 00:10:33.748 END TEST nvmf_target_multipath 00:10:33.748 ************************************ 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:33.748 ************************************ 00:10:33.748 START TEST nvmf_zcopy 00:10:33.748 ************************************ 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:33.748 * Looking for test storage... 00:10:33.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.748 07:20:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:33.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.748 --rc genhtml_branch_coverage=1 00:10:33.748 --rc genhtml_function_coverage=1 00:10:33.748 --rc genhtml_legend=1 00:10:33.748 --rc geninfo_all_blocks=1 00:10:33.748 --rc geninfo_unexecuted_blocks=1 00:10:33.748 00:10:33.748 ' 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:33.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.748 --rc genhtml_branch_coverage=1 00:10:33.748 --rc genhtml_function_coverage=1 00:10:33.748 --rc genhtml_legend=1 00:10:33.748 --rc geninfo_all_blocks=1 00:10:33.748 --rc geninfo_unexecuted_blocks=1 00:10:33.748 00:10:33.748 ' 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:33.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.748 --rc genhtml_branch_coverage=1 00:10:33.748 --rc genhtml_function_coverage=1 00:10:33.748 --rc genhtml_legend=1 00:10:33.748 --rc geninfo_all_blocks=1 00:10:33.748 --rc geninfo_unexecuted_blocks=1 00:10:33.748 00:10:33.748 ' 00:10:33.748 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:33.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.748 --rc genhtml_branch_coverage=1 00:10:33.749 --rc 
genhtml_function_coverage=1 00:10:33.749 --rc genhtml_legend=1 00:10:33.749 --rc geninfo_all_blocks=1 00:10:33.749 --rc geninfo_unexecuted_blocks=1 00:10:33.749 00:10:33.749 ' 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.749 07:20:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:33.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:33.749 07:20:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:33.749 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.894 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.894 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:41.894 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:41.895 07:20:25 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:41.895 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:41.895 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:41.895 Found net devices under 0000:31:00.0: cvl_0_0 00:10:41.895 07:20:25 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:41.895 Found net devices under 0000:31:00.1: cvl_0_1 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.895 07:20:25 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:41.895 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:42.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:42.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:10:42.157 00:10:42.157 --- 10.0.0.2 ping statistics --- 00:10:42.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.157 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:42.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:42.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:10:42.157 00:10:42.157 --- 10.0.0.1 ping statistics --- 00:10:42.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.157 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1945899 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1945899 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1945899 ']' 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.157 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.157 [2024-11-26 07:20:26.230955] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:10:42.157 [2024-11-26 07:20:26.231022] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.419 [2024-11-26 07:20:26.339620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.419 [2024-11-26 07:20:26.387944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.419 [2024-11-26 07:20:26.387995] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:42.419 [2024-11-26 07:20:26.388004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.419 [2024-11-26 07:20:26.388011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.419 [2024-11-26 07:20:26.388018] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:42.419 [2024-11-26 07:20:26.388853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.993 [2024-11-26 07:20:27.089620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.993 [2024-11-26 07:20:27.113877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.993 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.254 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.255 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:43.255 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.255 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.255 malloc0 00:10:43.255 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:43.255 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:43.255 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.255 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.255 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.255 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:43.255 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:43.255 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:43.255 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:43.255 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:43.255 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:43.255 { 00:10:43.255 "params": { 00:10:43.255 "name": "Nvme$subsystem", 00:10:43.255 "trtype": "$TEST_TRANSPORT", 00:10:43.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:43.255 "adrfam": "ipv4", 00:10:43.255 "trsvcid": "$NVMF_PORT", 00:10:43.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:43.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:43.255 "hdgst": ${hdgst:-false}, 00:10:43.255 "ddgst": ${ddgst:-false} 00:10:43.255 }, 00:10:43.255 "method": "bdev_nvme_attach_controller" 00:10:43.255 } 00:10:43.255 EOF 00:10:43.255 )") 00:10:43.255 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:43.255 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:43.255 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:43.255 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:43.255 "params": { 00:10:43.255 "name": "Nvme1", 00:10:43.255 "trtype": "tcp", 00:10:43.255 "traddr": "10.0.0.2", 00:10:43.255 "adrfam": "ipv4", 00:10:43.255 "trsvcid": "4420", 00:10:43.255 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:43.255 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:43.255 "hdgst": false, 00:10:43.255 "ddgst": false 00:10:43.255 }, 00:10:43.255 "method": "bdev_nvme_attach_controller" 00:10:43.255 }' 00:10:43.255 [2024-11-26 07:20:27.215597] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:10:43.255 [2024-11-26 07:20:27.215663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1946082 ] 00:10:43.255 [2024-11-26 07:20:27.298257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.255 [2024-11-26 07:20:27.339508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.515 Running I/O for 10 seconds... 
00:10:45.549 6712.00 IOPS, 52.44 MiB/s [2024-11-26T06:20:30.627Z] 6742.50 IOPS, 52.68 MiB/s [2024-11-26T06:20:31.570Z] 6767.33 IOPS, 52.87 MiB/s [2024-11-26T06:20:32.511Z] 6769.75 IOPS, 52.89 MiB/s [2024-11-26T06:20:33.896Z] 7099.20 IOPS, 55.46 MiB/s [2024-11-26T06:20:34.839Z] 7550.83 IOPS, 58.99 MiB/s [2024-11-26T06:20:35.783Z] 7871.43 IOPS, 61.50 MiB/s [2024-11-26T06:20:36.727Z] 8114.12 IOPS, 63.39 MiB/s [2024-11-26T06:20:37.669Z] 8303.33 IOPS, 64.87 MiB/s [2024-11-26T06:20:37.669Z] 8454.90 IOPS, 66.05 MiB/s 00:10:53.532 Latency(us) 00:10:53.532 [2024-11-26T06:20:37.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.532 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:53.532 Verification LBA range: start 0x0 length 0x1000 00:10:53.532 Nvme1n1 : 10.01 8457.20 66.07 0.00 0.00 15083.14 2184.53 26651.31 00:10:53.532 [2024-11-26T06:20:37.669Z] =================================================================================================================== 00:10:53.532 [2024-11-26T06:20:37.669Z] Total : 8457.20 66.07 0.00 0.00 15083.14 2184.53 26651.31 00:10:53.532 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1948102 00:10:53.532 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:53.532 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.532 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:53.532 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:53.532 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:53.532 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:53.532 07:20:37 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:53.532 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:53.532 { 00:10:53.532 "params": { 00:10:53.532 "name": "Nvme$subsystem", 00:10:53.532 "trtype": "$TEST_TRANSPORT", 00:10:53.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:53.532 "adrfam": "ipv4", 00:10:53.532 "trsvcid": "$NVMF_PORT", 00:10:53.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:53.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:53.532 "hdgst": ${hdgst:-false}, 00:10:53.532 "ddgst": ${ddgst:-false} 00:10:53.532 }, 00:10:53.532 "method": "bdev_nvme_attach_controller" 00:10:53.532 } 00:10:53.532 EOF 00:10:53.532 )") 00:10:53.532 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:53.532 [2024-11-26 07:20:37.640771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.532 [2024-11-26 07:20:37.640802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.532 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:53.532 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:10:53.532 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:10:53.532 "params": {
00:10:53.532 "name": "Nvme1",
00:10:53.532 "trtype": "tcp",
00:10:53.532 "traddr": "10.0.0.2",
00:10:53.532 "adrfam": "ipv4",
00:10:53.532 "trsvcid": "4420",
00:10:53.532 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:53.532 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:53.532 "hdgst": false,
00:10:53.532 "ddgst": false
00:10:53.532 },
00:10:53.532 "method": "bdev_nvme_attach_controller"
00:10:53.532 }'
00:10:53.532 [2024-11-26 07:20:37.652772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:53.532 [2024-11-26 07:20:37.652780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:53.792 [2024-11-26 07:20:37.664801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:53.792 [2024-11-26 07:20:37.664809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:53.792 [2024-11-26 07:20:37.676830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:53.792 [2024-11-26 07:20:37.676837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:53.792 [2024-11-26 07:20:37.683091] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization...
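The xtrace above shows how the bdevperf `--json` config is assembled: `gen_nvmf_target_json` appends one `bdev_nvme_attach_controller` entry per subsystem via a heredoc, joins the fragments with `IFS=,`, and prints the result for `jq` to validate. A minimal standalone sketch of that pattern follows; the transport, address, and port values here are illustrative placeholders for this sketch (not taken from the harness), and the surrounding JSON scaffolding that the real helper adds before invoking jq/bdevperf is omitted:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern from the xtrace above.
# TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT values are placeholders.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
    # Each iteration appends one attach-controller JSON entry; the heredoc
    # lets shell variables expand inside the JSON text, and ${hdgst:-false}
    # defaults the digest flags to false when unset.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Join the entries with commas, as the IFS=, / printf step in the xtrace does;
# in the real script this output is fed to bdevperf via --json /dev/fd/63.
IFS=,
printf '%s\n' "${config[*]}"
```

With a single subsystem the joined output is exactly the one JSON object printed in the log above, with the placeholders substituted.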
00:10:53.792 [2024-11-26 07:20:37.683137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1948102 ]
00:10:53.793 [2024-11-26 07:20:37.760569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:53.793 [2024-11-26 07:20:37.796197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:54.054 Running I/O for 5 seconds...
00:10:55.101 19553.00 IOPS, 152.76 MiB/s [2024-11-26T06:20:39.238Z]
00:10:55.363 [2024-11-26 07:20:39.436338] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.363 [2024-11-26 07:20:39.436353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.363 [2024-11-26 07:20:39.448939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.363 [2024-11-26 07:20:39.448955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.363 [2024-11-26 07:20:39.461673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.363 [2024-11-26 07:20:39.461688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.363 [2024-11-26 07:20:39.474676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.363 [2024-11-26 07:20:39.474691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.363 [2024-11-26 07:20:39.487830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.363 [2024-11-26 07:20:39.487845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.500439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.625 [2024-11-26 07:20:39.500454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.513391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.625 [2024-11-26 07:20:39.513406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.526619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.625 [2024-11-26 07:20:39.526634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.539706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:55.625 [2024-11-26 07:20:39.539721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.552677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.625 [2024-11-26 07:20:39.552692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.565805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.625 [2024-11-26 07:20:39.565820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.579128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.625 [2024-11-26 07:20:39.579143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.591352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.625 [2024-11-26 07:20:39.591367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.604151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.625 [2024-11-26 07:20:39.604166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.616964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.625 [2024-11-26 07:20:39.616979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.629752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.625 [2024-11-26 07:20:39.629767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.642497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.625 
[2024-11-26 07:20:39.642512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.655039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.625 [2024-11-26 07:20:39.655054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.668494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.625 [2024-11-26 07:20:39.668509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.681583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.625 [2024-11-26 07:20:39.681598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.694566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.625 [2024-11-26 07:20:39.694581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.707050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.625 [2024-11-26 07:20:39.707065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.719798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.625 [2024-11-26 07:20:39.719813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.732310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.625 [2024-11-26 07:20:39.732325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.625 [2024-11-26 07:20:39.744599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.625 [2024-11-26 07:20:39.744614] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.887 [2024-11-26 07:20:39.757441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.887 [2024-11-26 07:20:39.757456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.887 [2024-11-26 07:20:39.770629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.887 [2024-11-26 07:20:39.770644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.887 [2024-11-26 07:20:39.783376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.887 [2024-11-26 07:20:39.783391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.887 [2024-11-26 07:20:39.796608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.887 [2024-11-26 07:20:39.796623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.887 [2024-11-26 07:20:39.809954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.887 [2024-11-26 07:20:39.809968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.887 [2024-11-26 07:20:39.823403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.887 [2024-11-26 07:20:39.823417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.887 [2024-11-26 07:20:39.835792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.887 [2024-11-26 07:20:39.835807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.887 [2024-11-26 07:20:39.848964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.887 [2024-11-26 07:20:39.848979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:55.887 [2024-11-26 07:20:39.862508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.887 [2024-11-26 07:20:39.862524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.887 [2024-11-26 07:20:39.875597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.887 [2024-11-26 07:20:39.875612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.887 [2024-11-26 07:20:39.888504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.887 [2024-11-26 07:20:39.888519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.887 [2024-11-26 07:20:39.901751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.887 [2024-11-26 07:20:39.901767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.887 [2024-11-26 07:20:39.914633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.887 [2024-11-26 07:20:39.914648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.887 [2024-11-26 07:20:39.927370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.887 [2024-11-26 07:20:39.927385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.887 [2024-11-26 07:20:39.940305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.887 [2024-11-26 07:20:39.940320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.888 [2024-11-26 07:20:39.953118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.888 [2024-11-26 07:20:39.953133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.888 [2024-11-26 07:20:39.966192] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.888 [2024-11-26 07:20:39.966207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.888 [2024-11-26 07:20:39.979498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.888 [2024-11-26 07:20:39.979513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.888 [2024-11-26 07:20:39.992320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.888 [2024-11-26 07:20:39.992335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.888 [2024-11-26 07:20:40.006061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.888 [2024-11-26 07:20:40.006077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.148 [2024-11-26 07:20:40.018702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.148 [2024-11-26 07:20:40.018718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.148 [2024-11-26 07:20:40.031576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.148 [2024-11-26 07:20:40.031591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.148 [2024-11-26 07:20:40.044369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.148 [2024-11-26 07:20:40.044385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.149 [2024-11-26 07:20:40.057460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.149 [2024-11-26 07:20:40.057476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.149 [2024-11-26 07:20:40.070362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:56.149 [2024-11-26 07:20:40.070377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.149 [2024-11-26 07:20:40.083581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.149 [2024-11-26 07:20:40.083596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.149 [2024-11-26 07:20:40.096035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.149 [2024-11-26 07:20:40.096049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.149 [2024-11-26 07:20:40.108979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.149 [2024-11-26 07:20:40.108993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.149 [2024-11-26 07:20:40.122028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.149 [2024-11-26 07:20:40.122043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.149 [2024-11-26 07:20:40.134967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.149 [2024-11-26 07:20:40.134982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.149 [2024-11-26 07:20:40.147878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.149 [2024-11-26 07:20:40.147893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.149 19649.50 IOPS, 153.51 MiB/s [2024-11-26T06:20:40.286Z] [2024-11-26 07:20:40.160869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.149 [2024-11-26 07:20:40.160883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.149 [2024-11-26 07:20:40.173876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:56.149 [2024-11-26 07:20:40.173890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.149 [2024-11-26 07:20:40.186183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.149 [2024-11-26 07:20:40.186197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.149 [2024-11-26 07:20:40.199431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.149 [2024-11-26 07:20:40.199445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.149 [2024-11-26 07:20:40.212027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.149 [2024-11-26 07:20:40.212042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.149 [2024-11-26 07:20:40.225254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.149 [2024-11-26 07:20:40.225269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.149 [2024-11-26 07:20:40.238299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.149 [2024-11-26 07:20:40.238314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.149 [2024-11-26 07:20:40.251371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.149 [2024-11-26 07:20:40.251386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.149 [2024-11-26 07:20:40.264766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.149 [2024-11-26 07:20:40.264781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.149 [2024-11-26 07:20:40.277960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.149 
[2024-11-26 07:20:40.277975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.410 [2024-11-26 07:20:40.290643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.290658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.410 [2024-11-26 07:20:40.303434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.303448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.410 [2024-11-26 07:20:40.316654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.316668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.410 [2024-11-26 07:20:40.329650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.329665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.410 [2024-11-26 07:20:40.342680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.342694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.410 [2024-11-26 07:20:40.355607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.355625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.410 [2024-11-26 07:20:40.368284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.368298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.410 [2024-11-26 07:20:40.380774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.380788] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.410 [2024-11-26 07:20:40.392831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.392845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.410 [2024-11-26 07:20:40.405822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.405836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.410 [2024-11-26 07:20:40.419014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.419028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.410 [2024-11-26 07:20:40.431707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.431722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.410 [2024-11-26 07:20:40.444975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.444990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.410 [2024-11-26 07:20:40.457162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.457177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.410 [2024-11-26 07:20:40.470345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.470359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.410 [2024-11-26 07:20:40.483074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.483089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:56.410 [2024-11-26 07:20:40.496124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.496139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.410 [2024-11-26 07:20:40.509244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.509258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.410 [2024-11-26 07:20:40.522390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.522405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.410 [2024-11-26 07:20:40.534637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.410 [2024-11-26 07:20:40.534653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.547526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.672 [2024-11-26 07:20:40.547541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.560438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.672 [2024-11-26 07:20:40.560453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.573578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.672 [2024-11-26 07:20:40.573593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.586449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.672 [2024-11-26 07:20:40.586464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.599782] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.672 [2024-11-26 07:20:40.599801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.612927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.672 [2024-11-26 07:20:40.612941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.626032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.672 [2024-11-26 07:20:40.626046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.638999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.672 [2024-11-26 07:20:40.639014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.651822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.672 [2024-11-26 07:20:40.651836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.664765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.672 [2024-11-26 07:20:40.664780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.677457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.672 [2024-11-26 07:20:40.677472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.690933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.672 [2024-11-26 07:20:40.690947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.703986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:56.672 [2024-11-26 07:20:40.704000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.717193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.672 [2024-11-26 07:20:40.717208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.730196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.672 [2024-11-26 07:20:40.730211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.743289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.672 [2024-11-26 07:20:40.743304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.755916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.672 [2024-11-26 07:20:40.755930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.768790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.672 [2024-11-26 07:20:40.768804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.781794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.672 [2024-11-26 07:20:40.781809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.672 [2024-11-26 07:20:40.795049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.673 [2024-11-26 07:20:40.795063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.934 [2024-11-26 07:20:40.808096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.934 
[2024-11-26 07:20:40.808110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.934 [2024-11-26 07:20:40.820561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.934 [2024-11-26 07:20:40.820575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.934 [2024-11-26 07:20:40.832654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.934 [2024-11-26 07:20:40.832668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.934 [2024-11-26 07:20:40.845830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.934 [2024-11-26 07:20:40.845848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.934 [2024-11-26 07:20:40.858799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.934 [2024-11-26 07:20:40.858814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.934 [2024-11-26 07:20:40.871943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.934 [2024-11-26 07:20:40.871957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.934 [2024-11-26 07:20:40.884890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.934 [2024-11-26 07:20:40.884905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.934 [2024-11-26 07:20:40.898319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.934 [2024-11-26 07:20:40.898333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.934 [2024-11-26 07:20:40.910966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.934 [2024-11-26 07:20:40.910980] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.934 [2024-11-26 07:20:40.924327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.934 [2024-11-26 07:20:40.924341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.934 [2024-11-26 07:20:40.937299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.934 [2024-11-26 07:20:40.937313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.934 [2024-11-26 07:20:40.950151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.934 [2024-11-26 07:20:40.950166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.934 [2024-11-26 07:20:40.963499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.934 [2024-11-26 07:20:40.963514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.934 [2024-11-26 07:20:40.976185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.934 [2024-11-26 07:20:40.976200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.934 [2024-11-26 07:20:40.988944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.934 [2024-11-26 07:20:40.988959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.934 [2024-11-26 07:20:41.002105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.934 [2024-11-26 07:20:41.002120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.934 [2024-11-26 07:20:41.015027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.934 [2024-11-26 07:20:41.015041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:56.934 [2024-11-26 07:20:41.027878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.934 [2024-11-26 07:20:41.027892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc_ns_paused error pair repeated every ~13 ms from 07:20:41.041 through 07:20:43.026; duplicate lines elided ...]
00:10:57.195 19665.33 IOPS, 153.64 MiB/s [2024-11-26T06:20:41.332Z]
00:10:58.242 19655.50 IOPS, 153.56 MiB/s [2024-11-26T06:20:42.379Z]
00:10:59.027 [2024-11-26 07:20:43.039137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:59.027 [2024-11-26 07:20:43.039152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.027 [2024-11-26 07:20:43.052240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.027 [2024-11-26 07:20:43.052255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.027 [2024-11-26 07:20:43.064646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.027 [2024-11-26 07:20:43.064660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.027 [2024-11-26 07:20:43.077068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.027 [2024-11-26 07:20:43.077082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.027 [2024-11-26 07:20:43.090312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.027 [2024-11-26 07:20:43.090327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.027 [2024-11-26 07:20:43.103123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.027 [2024-11-26 07:20:43.103137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.027 [2024-11-26 07:20:43.115950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.027 [2024-11-26 07:20:43.115965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.027 [2024-11-26 07:20:43.129095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.027 [2024-11-26 07:20:43.129110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.027 [2024-11-26 07:20:43.142326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.027 
[2024-11-26 07:20:43.142341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.027 [2024-11-26 07:20:43.155506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.027 [2024-11-26 07:20:43.155520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.289 19680.80 IOPS, 153.76 MiB/s [2024-11-26T06:20:43.426Z] [2024-11-26 07:20:43.167124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.289 [2024-11-26 07:20:43.167139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.289 00:10:59.289 Latency(us) 00:10:59.289 [2024-11-26T06:20:43.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:59.289 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:59.289 Nvme1n1 : 5.01 19679.42 153.75 0.00 0.00 6497.44 2512.21 16602.45 00:10:59.289 [2024-11-26T06:20:43.426Z] =================================================================================================================== 00:10:59.289 [2024-11-26T06:20:43.426Z] Total : 19679.42 153.75 0.00 0.00 6497.44 2512.21 16602.45 00:10:59.289 [2024-11-26 07:20:43.177335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.289 [2024-11-26 07:20:43.177348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.289 [2024-11-26 07:20:43.189372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.289 [2024-11-26 07:20:43.189385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.289 [2024-11-26 07:20:43.201396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.289 [2024-11-26 07:20:43.201408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.289 [2024-11-26 07:20:43.213427] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.289 [2024-11-26 07:20:43.213439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.289 [2024-11-26 07:20:43.225455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.289 [2024-11-26 07:20:43.225466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.289 [2024-11-26 07:20:43.237485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.289 [2024-11-26 07:20:43.237501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.289 [2024-11-26 07:20:43.249516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.289 [2024-11-26 07:20:43.249524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.289 [2024-11-26 07:20:43.261549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.289 [2024-11-26 07:20:43.261560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.289 [2024-11-26 07:20:43.273578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.289 [2024-11-26 07:20:43.273585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.289 [2024-11-26 07:20:43.285608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.289 [2024-11-26 07:20:43.285618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1948102) - No such process 00:10:59.289 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1948102 00:10:59.289 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.289 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.289 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.289 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.289 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:59.289 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.289 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.289 delay0 00:10:59.289 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.289 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:59.289 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.289 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.289 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.289 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:59.551 [2024-11-26 07:20:43.440342] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:07.685 [2024-11-26 07:20:50.517968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127dce0 is same with the state(6) to be set 00:11:07.685 
Initializing NVMe Controllers 00:11:07.685 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:07.685 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:07.685 Initialization complete. Launching workers. 00:11:07.685 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 236, failed: 31205 00:11:07.685 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 31313, failed to submit 128 00:11:07.685 success 31239, unsuccessful 74, failed 0 00:11:07.685 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:07.685 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:07.685 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:07.685 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:07.685 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:07.685 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:07.685 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:07.685 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:07.685 rmmod nvme_tcp 00:11:07.685 rmmod nvme_fabrics 00:11:07.685 rmmod nvme_keyring 00:11:07.685 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:07.685 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:07.685 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1945899 ']' 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1945899 00:11:07.686 
07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1945899 ']' 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1945899 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1945899 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1945899' 00:11:07.686 killing process with pid 1945899 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1945899 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1945899 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.686 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.070 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:09.070 00:11:09.070 real 0m35.393s 00:11:09.070 user 0m46.026s 00:11:09.070 sys 0m12.097s 00:11:09.070 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.070 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.070 ************************************ 00:11:09.070 END TEST nvmf_zcopy 00:11:09.070 ************************************ 00:11:09.070 07:20:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:09.070 07:20:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:09.070 07:20:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.070 07:20:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:09.071 ************************************ 00:11:09.071 START TEST nvmf_nmic 00:11:09.071 ************************************ 00:11:09.071 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:09.071 * Looking for test storage... 
00:11:09.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.071 07:20:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:09.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.071 --rc genhtml_branch_coverage=1 00:11:09.071 --rc genhtml_function_coverage=1 00:11:09.071 --rc genhtml_legend=1 00:11:09.071 --rc geninfo_all_blocks=1 00:11:09.071 --rc geninfo_unexecuted_blocks=1 
00:11:09.071 00:11:09.071 ' 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:09.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.071 --rc genhtml_branch_coverage=1 00:11:09.071 --rc genhtml_function_coverage=1 00:11:09.071 --rc genhtml_legend=1 00:11:09.071 --rc geninfo_all_blocks=1 00:11:09.071 --rc geninfo_unexecuted_blocks=1 00:11:09.071 00:11:09.071 ' 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:09.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.071 --rc genhtml_branch_coverage=1 00:11:09.071 --rc genhtml_function_coverage=1 00:11:09.071 --rc genhtml_legend=1 00:11:09.071 --rc geninfo_all_blocks=1 00:11:09.071 --rc geninfo_unexecuted_blocks=1 00:11:09.071 00:11:09.071 ' 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:09.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.071 --rc genhtml_branch_coverage=1 00:11:09.071 --rc genhtml_function_coverage=1 00:11:09.071 --rc genhtml_legend=1 00:11:09.071 --rc geninfo_all_blocks=1 00:11:09.071 --rc geninfo_unexecuted_blocks=1 00:11:09.071 00:11:09.071 ' 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.071 07:20:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.071 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.072 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.072 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:09.072 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:09.072 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.072 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:09.072 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:09.072 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:09.072 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.072 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.072 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.072 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:09.072 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:09.072 
07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:09.072 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.214 07:21:01 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:17.214 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:17.214 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.214 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:17.215 Found net devices under 0000:31:00.0: cvl_0_0 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:17.215 Found net devices under 0000:31:00.1: cvl_0_1 00:11:17.215 
07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:17.215 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:17.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:17.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:11:17.476 00:11:17.476 --- 10.0.0.2 ping statistics --- 00:11:17.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.476 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:17.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:17.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:11:17.476 00:11:17.476 --- 10.0.0.1 ping statistics --- 00:11:17.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.476 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1955556 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1955556 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1955556 ']' 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.476 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.477 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.477 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.477 07:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:17.477 [2024-11-26 07:21:01.548106] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:11:17.477 [2024-11-26 07:21:01.548177] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.737 [2024-11-26 07:21:01.640499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.737 [2024-11-26 07:21:01.682683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.737 [2024-11-26 07:21:01.682718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:17.737 [2024-11-26 07:21:01.682726] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.737 [2024-11-26 07:21:01.682733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.737 [2024-11-26 07:21:01.682739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:17.737 [2024-11-26 07:21:01.684510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.737 [2024-11-26 07:21:01.684627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.737 [2024-11-26 07:21:01.684784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.737 [2024-11-26 07:21:01.684784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:18.307 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.307 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:18.307 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:18.307 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:18.307 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:18.307 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.307 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:18.307 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.307 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:18.307 [2024-11-26 07:21:02.403504] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.307 
07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.307 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:18.307 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.307 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:18.567 Malloc0 00:11:18.567 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.567 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:18.567 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.567 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:18.567 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.567 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:18.567 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.567 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:18.567 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.567 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:18.568 [2024-11-26 07:21:02.473215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:18.568 test case1: single bdev can't be used in multiple subsystems 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:18.568 [2024-11-26 07:21:02.509136] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:18.568 [2024-11-26 
07:21:02.509155] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:18.568 [2024-11-26 07:21:02.509163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.568 request: 00:11:18.568 { 00:11:18.568 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:18.568 "namespace": { 00:11:18.568 "bdev_name": "Malloc0", 00:11:18.568 "no_auto_visible": false 00:11:18.568 }, 00:11:18.568 "method": "nvmf_subsystem_add_ns", 00:11:18.568 "req_id": 1 00:11:18.568 } 00:11:18.568 Got JSON-RPC error response 00:11:18.568 response: 00:11:18.568 { 00:11:18.568 "code": -32602, 00:11:18.568 "message": "Invalid parameters" 00:11:18.568 } 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:18.568 Adding namespace failed - expected result. 
00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:18.568 test case2: host connect to nvmf target in multiple paths 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:18.568 [2024-11-26 07:21:02.521288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.568 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:20.477 07:21:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:21.860 07:21:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:21.860 07:21:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:21.860 07:21:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:21.860 07:21:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:21.860 07:21:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:11:23.771 07:21:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:23.771 07:21:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:23.771 07:21:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:23.771 07:21:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:23.771 07:21:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:23.771 07:21:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:23.771 07:21:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:23.771 [global] 00:11:23.771 thread=1 00:11:23.771 invalidate=1 00:11:23.771 rw=write 00:11:23.771 time_based=1 00:11:23.771 runtime=1 00:11:23.771 ioengine=libaio 00:11:23.771 direct=1 00:11:23.771 bs=4096 00:11:23.771 iodepth=1 00:11:23.771 norandommap=0 00:11:23.771 numjobs=1 00:11:23.771 00:11:23.771 verify_dump=1 00:11:23.771 verify_backlog=512 00:11:23.771 verify_state_save=0 00:11:23.771 do_verify=1 00:11:23.771 verify=crc32c-intel 00:11:23.771 [job0] 00:11:23.771 filename=/dev/nvme0n1 00:11:23.771 Could not set queue depth (nvme0n1) 00:11:24.031 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:24.031 fio-3.35 00:11:24.031 Starting 1 thread 00:11:25.416 00:11:25.416 job0: (groupid=0, jobs=1): err= 0: pid=1957131: Tue Nov 26 07:21:09 2024 00:11:25.416 read: IOPS=70, BW=284KiB/s (291kB/s)(284KiB/1001msec) 00:11:25.416 slat (nsec): min=25565, max=39141, avg=26657.89, stdev=1865.60 00:11:25.416 clat (usec): min=749, max=43032, avg=9028.54, stdev=16341.56 00:11:25.416 lat (usec): min=775, max=43060, 
avg=9055.20, stdev=16342.33 00:11:25.416 clat percentiles (usec): 00:11:25.416 | 1.00th=[ 750], 5.00th=[ 848], 10.00th=[ 906], 20.00th=[ 955], 00:11:25.416 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1020], 00:11:25.416 | 70.00th=[ 1057], 80.00th=[ 1156], 90.00th=[41681], 95.00th=[42206], 00:11:25.416 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:11:25.416 | 99.99th=[43254] 00:11:25.416 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:25.416 slat (usec): min=9, max=25765, avg=138.08, stdev=1592.59 00:11:25.416 clat (usec): min=271, max=764, avg=553.29, stdev=94.61 00:11:25.416 lat (usec): min=281, max=26246, avg=691.37, stdev=1589.56 00:11:25.416 clat percentiles (usec): 00:11:25.416 | 1.00th=[ 351], 5.00th=[ 396], 10.00th=[ 424], 20.00th=[ 478], 00:11:25.416 | 30.00th=[ 502], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 578], 00:11:25.416 | 70.00th=[ 603], 80.00th=[ 644], 90.00th=[ 685], 95.00th=[ 709], 00:11:25.416 | 99.00th=[ 734], 99.50th=[ 734], 99.90th=[ 766], 99.95th=[ 766], 00:11:25.416 | 99.99th=[ 766] 00:11:25.416 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:25.416 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:25.416 lat (usec) : 500=25.56%, 750=62.26%, 1000=5.32% 00:11:25.416 lat (msec) : 2=4.46%, 50=2.40% 00:11:25.416 cpu : usr=1.70%, sys=1.40%, ctx=589, majf=0, minf=1 00:11:25.416 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:25.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.417 issued rwts: total=71,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:25.417 00:11:25.417 Run status group 0 (all jobs): 00:11:25.417 READ: bw=284KiB/s (291kB/s), 284KiB/s-284KiB/s (291kB/s-291kB/s), io=284KiB 
(291kB), run=1001-1001msec 00:11:25.417 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:11:25.417 00:11:25.417 Disk stats (read/write): 00:11:25.417 nvme0n1: ios=57/512, merge=0/0, ticks=1204/240, in_queue=1444, util=95.99% 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:25.417 rmmod nvme_tcp 00:11:25.417 rmmod nvme_fabrics 00:11:25.417 rmmod nvme_keyring 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1955556 ']' 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1955556 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1955556 ']' 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1955556 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.417 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1955556 00:11:25.678 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:25.678 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:25.678 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1955556' 00:11:25.678 killing process with pid 1955556 00:11:25.678 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1955556 00:11:25.678 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1955556 00:11:25.678 07:21:09 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:25.678 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:25.678 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:25.678 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:25.678 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:25.678 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:25.678 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:25.678 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:25.678 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:25.678 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.678 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.678 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.225 07:21:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:28.225 00:11:28.225 real 0m18.900s 00:11:28.225 user 0m48.924s 00:11:28.225 sys 0m7.163s 00:11:28.225 07:21:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.225 07:21:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:28.225 ************************************ 00:11:28.225 END TEST nvmf_nmic 00:11:28.225 ************************************ 00:11:28.225 07:21:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:28.225 07:21:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:28.225 07:21:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.225 07:21:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:28.225 ************************************ 00:11:28.225 START TEST nvmf_fio_target 00:11:28.225 ************************************ 00:11:28.225 07:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:28.225 * Looking for test storage... 00:11:28.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.225 07:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:28.225 07:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:28.225 07:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:28.225 07:21:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:28.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.225 --rc genhtml_branch_coverage=1 00:11:28.225 --rc genhtml_function_coverage=1 00:11:28.225 --rc genhtml_legend=1 00:11:28.225 --rc geninfo_all_blocks=1 00:11:28.225 --rc geninfo_unexecuted_blocks=1 00:11:28.225 00:11:28.225 ' 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:28.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.225 --rc genhtml_branch_coverage=1 00:11:28.225 --rc genhtml_function_coverage=1 00:11:28.225 --rc genhtml_legend=1 00:11:28.225 --rc geninfo_all_blocks=1 00:11:28.225 --rc geninfo_unexecuted_blocks=1 00:11:28.225 00:11:28.225 ' 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:28.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.225 --rc genhtml_branch_coverage=1 00:11:28.225 --rc genhtml_function_coverage=1 00:11:28.225 --rc genhtml_legend=1 00:11:28.225 --rc geninfo_all_blocks=1 00:11:28.225 --rc geninfo_unexecuted_blocks=1 00:11:28.225 00:11:28.225 ' 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:11:28.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.225 --rc genhtml_branch_coverage=1 00:11:28.225 --rc genhtml_function_coverage=1 00:11:28.225 --rc genhtml_legend=1 00:11:28.225 --rc geninfo_all_blocks=1 00:11:28.225 --rc geninfo_unexecuted_blocks=1 00:11:28.225 00:11:28.225 ' 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.225 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:28.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:28.226 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:36.368 07:21:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:36.368 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:36.368 07:21:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:36.368 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:36.368 Found net devices under 0000:31:00.0: cvl_0_0 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:36.368 Found net devices under 0000:31:00.1: cvl_0_1 
00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:36.368 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:36.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:36.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:11:36.369 00:11:36.369 --- 10.0.0.2 ping statistics --- 00:11:36.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.369 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:11:36.369 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:36.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:36.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:11:36.369 00:11:36.369 --- 10.0.0.1 ping statistics --- 00:11:36.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.369 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
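Editor's note: the `nvmf_tcp_init` steps traced above (common.sh@250–291) amount to a two-namespace TCP test topology: the target-side port `cvl_0_0` is moved into namespace `cvl_0_0_ns_spdk` with IP 10.0.0.2, the initiator port `cvl_0_1` stays in the default namespace with 10.0.0.1, port 4420 is opened, and connectivity is verified by ping in both directions. Condensed as a sketch (interface and namespace names are taken from the log; this requires root and the same dual-port NIC, so it is a configuration outline rather than a runnable test):

```
# Sketch of the setup traced above; requires root. Names cvl_0_0/cvl_0_1
# and cvl_0_0_ns_spdk come from the log output, not from a generic recipe.
ip netns add cvl_0_0_ns_spdk                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP (default netns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic in on the initiator interface (tagged for cleanup
# by the iptables-save | grep -v SPDK_NVMF restore step seen earlier).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                            # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator check
```

The `nvmf_tgt` application is then launched with `ip netns exec cvl_0_0_ns_spdk`, which is why the SPDK listener at 10.0.0.2:4420 is only reachable through this topology.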
00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1962613 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1962613 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1962613 ']' 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.630 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.630 [2024-11-26 07:21:20.597921] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:11:36.630 [2024-11-26 07:21:20.597982] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.630 [2024-11-26 07:21:20.688525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.630 [2024-11-26 07:21:20.729902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.630 [2024-11-26 07:21:20.729938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.630 [2024-11-26 07:21:20.729947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.630 [2024-11-26 07:21:20.729953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.630 [2024-11-26 07:21:20.729959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:36.630 [2024-11-26 07:21:20.731590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.630 [2024-11-26 07:21:20.731708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.630 [2024-11-26 07:21:20.731868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.630 [2024-11-26 07:21:20.731928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.575 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.575 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:37.575 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:37.575 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:37.575 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.575 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.575 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:37.575 [2024-11-26 07:21:21.599025] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.575 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:37.836 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:37.836 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.097 07:21:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:38.097 07:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.357 07:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:38.357 07:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.357 07:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:38.357 07:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:38.619 07:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.880 07:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:38.880 07:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.880 07:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:38.880 07:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:39.141 07:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:39.141 07:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:39.401 07:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:39.662 07:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:39.662 07:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:39.662 07:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:39.662 07:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:39.922 07:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:40.182 [2024-11-26 07:21:24.063209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.182 07:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:40.182 07:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:40.443 07:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
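The setup phase logged above (transport creation, malloc/raid bdevs, subsystem, namespaces, listener) can be condensed into a standalone command list. This is a sketch reconstructed from the log, not the actual `target/fio.sh` script: the relative `scripts/rpc.py` path is an assumption, `bdev_malloc_create 64 512` is actually invoked once per malloc bdev (Malloc0..Malloc6), and the commands are only assembled and printed here, never executed against a target.

```shell
#!/usr/bin/env bash
# Sketch of the target setup replayed from the log above (hypothetical paths).
# The list is built as data and echoed; pipe to "sh" only against a live
# SPDK nvmf_tgt. The initiator-side "nvme connect ..." step follows separately.
rpc="scripts/rpc.py"   # assumed location of SPDK's rpc.py
cmds="$(cat <<EOF
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
EOF
)"
printf '%s\n' "$cmds"
```

The ordering mirrors the log: the TCP listener on 10.0.0.2:4420 is added after the plain malloc namespaces but before the raid0/concat0 namespaces, which is why the later fio runs see four namespaces (nvme0n1..nvme0n4) on the connected controller.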
00:11:42.353 07:21:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:42.353 07:21:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:42.353 07:21:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:42.353 07:21:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:42.353 07:21:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:42.353 07:21:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:44.395 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:44.395 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:44.395 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:44.395 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:44.395 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:44.395 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:44.395 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:44.395 [global] 00:11:44.395 thread=1 00:11:44.395 invalidate=1 00:11:44.395 rw=write 00:11:44.395 time_based=1 00:11:44.395 runtime=1 00:11:44.395 ioengine=libaio 00:11:44.395 direct=1 00:11:44.395 bs=4096 00:11:44.395 iodepth=1 00:11:44.395 norandommap=0 00:11:44.395 numjobs=1 00:11:44.395 00:11:44.395 
verify_dump=1 00:11:44.395 verify_backlog=512 00:11:44.395 verify_state_save=0 00:11:44.395 do_verify=1 00:11:44.395 verify=crc32c-intel 00:11:44.395 [job0] 00:11:44.395 filename=/dev/nvme0n1 00:11:44.395 [job1] 00:11:44.395 filename=/dev/nvme0n2 00:11:44.395 [job2] 00:11:44.395 filename=/dev/nvme0n3 00:11:44.395 [job3] 00:11:44.395 filename=/dev/nvme0n4 00:11:44.395 Could not set queue depth (nvme0n1) 00:11:44.395 Could not set queue depth (nvme0n2) 00:11:44.395 Could not set queue depth (nvme0n3) 00:11:44.395 Could not set queue depth (nvme0n4) 00:11:44.395 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.395 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.395 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.395 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.395 fio-3.35 00:11:44.395 Starting 4 threads 00:11:45.794 00:11:45.794 job0: (groupid=0, jobs=1): err= 0: pid=1964528: Tue Nov 26 07:21:29 2024 00:11:45.794 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:45.794 slat (nsec): min=7939, max=65224, avg=26632.67, stdev=4182.35 00:11:45.794 clat (usec): min=626, max=1193, avg=962.34, stdev=107.85 00:11:45.794 lat (usec): min=636, max=1220, avg=988.97, stdev=108.08 00:11:45.794 clat percentiles (usec): 00:11:45.794 | 1.00th=[ 660], 5.00th=[ 783], 10.00th=[ 832], 20.00th=[ 873], 00:11:45.794 | 30.00th=[ 906], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 988], 00:11:45.794 | 70.00th=[ 1012], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1139], 00:11:45.794 | 99.00th=[ 1188], 99.50th=[ 1188], 99.90th=[ 1188], 99.95th=[ 1188], 00:11:45.794 | 99.99th=[ 1188] 00:11:45.794 write: IOPS=695, BW=2781KiB/s (2848kB/s)(2784KiB/1001msec); 0 zone resets 00:11:45.794 slat (usec): min=6, max=41563, avg=103.24, 
stdev=1608.36 00:11:45.794 clat (usec): min=217, max=865, avg=591.20, stdev=126.52 00:11:45.794 lat (usec): min=228, max=42414, avg=694.44, stdev=1624.86 00:11:45.794 clat percentiles (usec): 00:11:45.794 | 1.00th=[ 314], 5.00th=[ 371], 10.00th=[ 433], 20.00th=[ 478], 00:11:45.794 | 30.00th=[ 523], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 627], 00:11:45.794 | 70.00th=[ 660], 80.00th=[ 709], 90.00th=[ 766], 95.00th=[ 799], 00:11:45.794 | 99.00th=[ 848], 99.50th=[ 857], 99.90th=[ 865], 99.95th=[ 865], 00:11:45.794 | 99.99th=[ 865] 00:11:45.794 bw ( KiB/s): min= 4096, max= 4096, per=38.62%, avg=4096.00, stdev= 0.00, samples=1 00:11:45.794 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:45.794 lat (usec) : 250=0.17%, 500=14.82%, 750=36.59%, 1000=33.20% 00:11:45.794 lat (msec) : 2=15.23% 00:11:45.794 cpu : usr=2.00%, sys=3.20%, ctx=1214, majf=0, minf=1 00:11:45.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.794 issued rwts: total=512,696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.795 job1: (groupid=0, jobs=1): err= 0: pid=1964551: Tue Nov 26 07:21:29 2024 00:11:45.795 read: IOPS=15, BW=63.8KiB/s (65.3kB/s)(64.0KiB/1003msec) 00:11:45.795 slat (nsec): min=26164, max=26943, avg=26393.13, stdev=235.81 00:11:45.795 clat (usec): min=919, max=42892, avg=39444.72, stdev=10276.43 00:11:45.795 lat (usec): min=946, max=42919, avg=39471.11, stdev=10276.30 00:11:45.795 clat percentiles (usec): 00:11:45.795 | 1.00th=[ 922], 5.00th=[ 922], 10.00th=[41681], 20.00th=[41681], 00:11:45.795 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:45.795 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:11:45.795 | 99.00th=[42730], 99.50th=[42730], 
99.90th=[42730], 99.95th=[42730], 00:11:45.795 | 99.99th=[42730] 00:11:45.795 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:11:45.795 slat (usec): min=9, max=40753, avg=128.63, stdev=1837.34 00:11:45.795 clat (usec): min=227, max=1014, avg=585.64, stdev=115.47 00:11:45.795 lat (usec): min=238, max=41378, avg=714.27, stdev=1845.01 00:11:45.795 clat percentiles (usec): 00:11:45.795 | 1.00th=[ 326], 5.00th=[ 388], 10.00th=[ 449], 20.00th=[ 490], 00:11:45.795 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 611], 00:11:45.795 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 775], 00:11:45.795 | 99.00th=[ 848], 99.50th=[ 865], 99.90th=[ 1012], 99.95th=[ 1012], 00:11:45.795 | 99.99th=[ 1012] 00:11:45.795 bw ( KiB/s): min= 4096, max= 4096, per=38.62%, avg=4096.00, stdev= 0.00, samples=1 00:11:45.795 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:45.795 lat (usec) : 250=0.19%, 500=23.30%, 750=66.67%, 1000=6.82% 00:11:45.795 lat (msec) : 2=0.19%, 50=2.84% 00:11:45.795 cpu : usr=0.90%, sys=1.40%, ctx=532, majf=0, minf=1 00:11:45.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.795 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.795 job2: (groupid=0, jobs=1): err= 0: pid=1964552: Tue Nov 26 07:21:29 2024 00:11:45.795 read: IOPS=19, BW=79.1KiB/s (81.0kB/s)(80.0KiB/1011msec) 00:11:45.795 slat (nsec): min=8025, max=27744, avg=26081.65, stdev=4261.50 00:11:45.795 clat (usec): min=690, max=42029, avg=39290.05, stdev=9097.21 00:11:45.795 lat (usec): min=718, max=42056, avg=39316.13, stdev=9096.77 00:11:45.795 clat percentiles (usec): 00:11:45.795 | 1.00th=[ 693], 5.00th=[ 693], 10.00th=[41157], 
20.00th=[41157], 00:11:45.795 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:45.795 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:45.795 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:45.795 | 99.99th=[42206] 00:11:45.795 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:11:45.795 slat (nsec): min=8981, max=53971, avg=29115.32, stdev=10598.69 00:11:45.795 clat (usec): min=143, max=737, avg=399.40, stdev=112.67 00:11:45.795 lat (usec): min=153, max=771, avg=428.52, stdev=116.78 00:11:45.795 clat percentiles (usec): 00:11:45.795 | 1.00th=[ 167], 5.00th=[ 215], 10.00th=[ 255], 20.00th=[ 302], 00:11:45.795 | 30.00th=[ 326], 40.00th=[ 359], 50.00th=[ 408], 60.00th=[ 437], 00:11:45.795 | 70.00th=[ 461], 80.00th=[ 494], 90.00th=[ 545], 95.00th=[ 578], 00:11:45.795 | 99.00th=[ 668], 99.50th=[ 701], 99.90th=[ 742], 99.95th=[ 742], 00:11:45.795 | 99.99th=[ 742] 00:11:45.795 bw ( KiB/s): min= 4096, max= 4096, per=38.62%, avg=4096.00, stdev= 0.00, samples=1 00:11:45.795 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:45.795 lat (usec) : 250=8.65%, 500=69.92%, 750=17.86% 00:11:45.795 lat (msec) : 50=3.57% 00:11:45.795 cpu : usr=1.09%, sys=1.78%, ctx=533, majf=0, minf=1 00:11:45.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.795 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.795 job3: (groupid=0, jobs=1): err= 0: pid=1964553: Tue Nov 26 07:21:29 2024 00:11:45.795 read: IOPS=781, BW=3127KiB/s (3202kB/s)(3236KiB/1035msec) 00:11:45.795 slat (nsec): min=6956, max=47816, avg=25473.92, stdev=6305.21 00:11:45.795 clat (usec): min=218, max=42001, 
avg=905.63, stdev=3547.93 00:11:45.795 lat (usec): min=226, max=42028, avg=931.10, stdev=3548.44 00:11:45.795 clat percentiles (usec): 00:11:45.795 | 1.00th=[ 265], 5.00th=[ 347], 10.00th=[ 371], 20.00th=[ 437], 00:11:45.795 | 30.00th=[ 537], 40.00th=[ 594], 50.00th=[ 660], 60.00th=[ 676], 00:11:45.795 | 70.00th=[ 701], 80.00th=[ 717], 90.00th=[ 742], 95.00th=[ 766], 00:11:45.795 | 99.00th=[ 1106], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:45.795 | 99.99th=[42206] 00:11:45.795 write: IOPS=989, BW=3957KiB/s (4052kB/s)(4096KiB/1035msec); 0 zone resets 00:11:45.795 slat (usec): min=9, max=9254, avg=32.80, stdev=288.72 00:11:45.795 clat (usec): min=112, max=732, avg=227.84, stdev=104.08 00:11:45.795 lat (usec): min=123, max=9641, avg=260.64, stdev=314.17 00:11:45.795 clat percentiles (usec): 00:11:45.795 | 1.00th=[ 122], 5.00th=[ 127], 10.00th=[ 130], 20.00th=[ 135], 00:11:45.795 | 30.00th=[ 141], 40.00th=[ 153], 50.00th=[ 231], 60.00th=[ 247], 00:11:45.795 | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 396], 95.00th=[ 437], 00:11:45.795 | 99.00th=[ 537], 99.50th=[ 562], 99.90th=[ 701], 99.95th=[ 734], 00:11:45.795 | 99.99th=[ 734] 00:11:45.795 bw ( KiB/s): min= 3416, max= 4776, per=38.62%, avg=4096.00, stdev=961.67, samples=2 00:11:45.795 iops : min= 854, max= 1194, avg=1024.00, stdev=240.42, samples=2 00:11:45.795 lat (usec) : 250=34.75%, 500=31.64%, 750=30.17%, 1000=2.84% 00:11:45.795 lat (msec) : 2=0.27%, 50=0.33% 00:11:45.795 cpu : usr=2.51%, sys=4.26%, ctx=1835, majf=0, minf=1 00:11:45.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.795 issued rwts: total=809,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.795 00:11:45.795 Run status group 0 (all jobs): 00:11:45.795 READ: 
bw=5244KiB/s (5370kB/s), 63.8KiB/s-3127KiB/s (65.3kB/s-3202kB/s), io=5428KiB (5558kB), run=1001-1035msec 00:11:45.795 WRITE: bw=10.4MiB/s (10.9MB/s), 2026KiB/s-3957KiB/s (2074kB/s-4052kB/s), io=10.7MiB (11.2MB), run=1001-1035msec 00:11:45.795 00:11:45.795 Disk stats (read/write): 00:11:45.795 nvme0n1: ios=500/512, merge=0/0, ticks=649/293, in_queue=942, util=83.77% 00:11:45.795 nvme0n2: ios=66/512, merge=0/0, ticks=686/291, in_queue=977, util=87.74% 00:11:45.795 nvme0n3: ios=72/512, merge=0/0, ticks=695/164, in_queue=859, util=95.34% 00:11:45.795 nvme0n4: ios=828/1024, merge=0/0, ticks=1402/225, in_queue=1627, util=97.64% 00:11:45.795 07:21:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:45.795 [global] 00:11:45.795 thread=1 00:11:45.795 invalidate=1 00:11:45.795 rw=randwrite 00:11:45.795 time_based=1 00:11:45.795 runtime=1 00:11:45.795 ioengine=libaio 00:11:45.795 direct=1 00:11:45.795 bs=4096 00:11:45.795 iodepth=1 00:11:45.795 norandommap=0 00:11:45.795 numjobs=1 00:11:45.795 00:11:45.795 verify_dump=1 00:11:45.795 verify_backlog=512 00:11:45.795 verify_state_save=0 00:11:45.795 do_verify=1 00:11:45.795 verify=crc32c-intel 00:11:45.795 [job0] 00:11:45.795 filename=/dev/nvme0n1 00:11:45.795 [job1] 00:11:45.795 filename=/dev/nvme0n2 00:11:45.795 [job2] 00:11:45.795 filename=/dev/nvme0n3 00:11:45.795 [job3] 00:11:45.795 filename=/dev/nvme0n4 00:11:45.795 Could not set queue depth (nvme0n1) 00:11:45.795 Could not set queue depth (nvme0n2) 00:11:45.795 Could not set queue depth (nvme0n3) 00:11:45.795 Could not set queue depth (nvme0n4) 00:11:46.363 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.363 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.363 job2: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.363 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.363 fio-3.35 00:11:46.363 Starting 4 threads 00:11:47.307 00:11:47.307 job0: (groupid=0, jobs=1): err= 0: pid=1965001: Tue Nov 26 07:21:31 2024 00:11:47.307 read: IOPS=570, BW=2282KiB/s (2336kB/s)(2284KiB/1001msec) 00:11:47.307 slat (nsec): min=6083, max=70725, avg=26885.19, stdev=7266.10 00:11:47.307 clat (usec): min=329, max=1155, avg=765.76, stdev=130.62 00:11:47.307 lat (usec): min=336, max=1183, avg=792.64, stdev=130.61 00:11:47.307 clat percentiles (usec): 00:11:47.307 | 1.00th=[ 441], 5.00th=[ 545], 10.00th=[ 586], 20.00th=[ 660], 00:11:47.307 | 30.00th=[ 701], 40.00th=[ 742], 50.00th=[ 766], 60.00th=[ 799], 00:11:47.307 | 70.00th=[ 840], 80.00th=[ 881], 90.00th=[ 930], 95.00th=[ 955], 00:11:47.307 | 99.00th=[ 1057], 99.50th=[ 1123], 99.90th=[ 1156], 99.95th=[ 1156], 00:11:47.307 | 99.99th=[ 1156] 00:11:47.307 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:47.307 slat (nsec): min=8696, max=70400, avg=32518.50, stdev=9037.33 00:11:47.307 clat (usec): min=129, max=1628, avg=490.07, stdev=137.94 00:11:47.307 lat (usec): min=159, max=1663, avg=522.59, stdev=140.11 00:11:47.307 clat percentiles (usec): 00:11:47.307 | 1.00th=[ 212], 5.00th=[ 265], 10.00th=[ 306], 20.00th=[ 363], 00:11:47.307 | 30.00th=[ 416], 40.00th=[ 469], 50.00th=[ 502], 60.00th=[ 537], 00:11:47.307 | 70.00th=[ 570], 80.00th=[ 603], 90.00th=[ 652], 95.00th=[ 685], 00:11:47.307 | 99.00th=[ 766], 99.50th=[ 807], 99.90th=[ 1221], 99.95th=[ 1631], 00:11:47.307 | 99.99th=[ 1631] 00:11:47.307 bw ( KiB/s): min= 4096, max= 4096, per=31.15%, avg=4096.00, stdev= 0.00, samples=1 00:11:47.307 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:47.307 lat (usec) : 250=2.32%, 500=30.60%, 750=46.08%, 1000=20.06% 00:11:47.307 lat (msec) : 2=0.94% 
00:11:47.307 cpu : usr=3.10%, sys=6.70%, ctx=1597, majf=0, minf=1 00:11:47.307 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:47.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.307 issued rwts: total=571,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.307 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:47.307 job1: (groupid=0, jobs=1): err= 0: pid=1965020: Tue Nov 26 07:21:31 2024 00:11:47.307 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:47.307 slat (nsec): min=8129, max=59966, avg=28409.82, stdev=3900.01 00:11:47.307 clat (usec): min=607, max=1240, avg=970.85, stdev=125.23 00:11:47.307 lat (usec): min=635, max=1268, avg=999.26, stdev=125.30 00:11:47.307 clat percentiles (usec): 00:11:47.307 | 1.00th=[ 660], 5.00th=[ 750], 10.00th=[ 799], 20.00th=[ 848], 00:11:47.307 | 30.00th=[ 906], 40.00th=[ 963], 50.00th=[ 996], 60.00th=[ 1020], 00:11:47.307 | 70.00th=[ 1045], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:11:47.307 | 99.00th=[ 1205], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1237], 00:11:47.307 | 99.99th=[ 1237] 00:11:47.307 write: IOPS=727, BW=2909KiB/s (2979kB/s)(2912KiB/1001msec); 0 zone resets 00:11:47.307 slat (nsec): min=3387, max=54788, avg=31790.15, stdev=8770.44 00:11:47.307 clat (usec): min=240, max=1007, avg=625.43, stdev=118.43 00:11:47.307 lat (usec): min=274, max=1041, avg=657.22, stdev=122.05 00:11:47.307 clat percentiles (usec): 00:11:47.307 | 1.00th=[ 338], 5.00th=[ 416], 10.00th=[ 461], 20.00th=[ 519], 00:11:47.307 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 644], 60.00th=[ 676], 00:11:47.307 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 791], 00:11:47.307 | 99.00th=[ 898], 99.50th=[ 906], 99.90th=[ 1004], 99.95th=[ 1004], 00:11:47.307 | 99.99th=[ 1004] 00:11:47.307 bw ( KiB/s): min= 4096, max= 4096, per=31.15%, avg=4096.00, stdev= 
0.00, samples=1 00:11:47.307 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:47.307 lat (usec) : 250=0.08%, 500=8.95%, 750=45.16%, 1000=26.13% 00:11:47.307 lat (msec) : 2=19.68% 00:11:47.307 cpu : usr=2.70%, sys=4.90%, ctx=1244, majf=0, minf=1 00:11:47.307 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:47.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.307 issued rwts: total=512,728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.307 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:47.307 job2: (groupid=0, jobs=1): err= 0: pid=1965040: Tue Nov 26 07:21:31 2024 00:11:47.307 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:47.307 slat (nsec): min=26285, max=45296, avg=27315.71, stdev=2236.18 00:11:47.307 clat (usec): min=695, max=1390, avg=1026.17, stdev=108.93 00:11:47.307 lat (usec): min=723, max=1417, avg=1053.49, stdev=108.84 00:11:47.307 clat percentiles (usec): 00:11:47.307 | 1.00th=[ 750], 5.00th=[ 824], 10.00th=[ 881], 20.00th=[ 938], 00:11:47.307 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1037], 60.00th=[ 1074], 00:11:47.307 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1188], 00:11:47.307 | 99.00th=[ 1221], 99.50th=[ 1319], 99.90th=[ 1385], 99.95th=[ 1385], 00:11:47.307 | 99.99th=[ 1385] 00:11:47.307 write: IOPS=691, BW=2765KiB/s (2832kB/s)(2768KiB/1001msec); 0 zone resets 00:11:47.307 slat (nsec): min=9776, max=55754, avg=32367.14, stdev=7898.72 00:11:47.307 clat (usec): min=264, max=3660, avg=618.56, stdev=254.60 00:11:47.307 lat (usec): min=280, max=3694, avg=650.93, stdev=256.08 00:11:47.307 clat percentiles (usec): 00:11:47.307 | 1.00th=[ 306], 5.00th=[ 383], 10.00th=[ 433], 20.00th=[ 486], 00:11:47.307 | 30.00th=[ 523], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635], 00:11:47.307 | 70.00th=[ 668], 80.00th=[ 717], 90.00th=[ 783], 
95.00th=[ 832], 00:11:47.307 | 99.00th=[ 971], 99.50th=[ 3130], 99.90th=[ 3654], 99.95th=[ 3654], 00:11:47.307 | 99.99th=[ 3654] 00:11:47.307 bw ( KiB/s): min= 4096, max= 4096, per=31.15%, avg=4096.00, stdev= 0.00, samples=1 00:11:47.307 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:47.307 lat (usec) : 500=14.29%, 750=35.30%, 1000=23.34% 00:11:47.307 lat (msec) : 2=26.74%, 4=0.33% 00:11:47.307 cpu : usr=1.70%, sys=3.90%, ctx=1205, majf=0, minf=1 00:11:47.307 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:47.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.307 issued rwts: total=512,692,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.307 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:47.308 job3: (groupid=0, jobs=1): err= 0: pid=1965047: Tue Nov 26 07:21:31 2024 00:11:47.308 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:47.308 slat (nsec): min=6480, max=48256, avg=27922.37, stdev=4470.17 00:11:47.308 clat (usec): min=470, max=1457, avg=927.32, stdev=176.70 00:11:47.308 lat (usec): min=499, max=1485, avg=955.25, stdev=176.92 00:11:47.308 clat percentiles (usec): 00:11:47.308 | 1.00th=[ 553], 5.00th=[ 627], 10.00th=[ 693], 20.00th=[ 775], 00:11:47.308 | 30.00th=[ 832], 40.00th=[ 881], 50.00th=[ 938], 60.00th=[ 979], 00:11:47.308 | 70.00th=[ 1020], 80.00th=[ 1090], 90.00th=[ 1156], 95.00th=[ 1221], 00:11:47.308 | 99.00th=[ 1319], 99.50th=[ 1336], 99.90th=[ 1450], 99.95th=[ 1450], 00:11:47.308 | 99.99th=[ 1450] 00:11:47.308 write: IOPS=846, BW=3385KiB/s (3466kB/s)(3388KiB/1001msec); 0 zone resets 00:11:47.308 slat (usec): min=8, max=107, avg=34.05, stdev= 8.00 00:11:47.308 clat (usec): min=149, max=987, avg=556.24, stdev=152.27 00:11:47.308 lat (usec): min=177, max=1038, avg=590.29, stdev=154.35 00:11:47.308 clat percentiles (usec): 00:11:47.308 | 1.00th=[ 241], 
5.00th=[ 293], 10.00th=[ 351], 20.00th=[ 424], 00:11:47.308 | 30.00th=[ 478], 40.00th=[ 523], 50.00th=[ 562], 60.00th=[ 603], 00:11:47.308 | 70.00th=[ 635], 80.00th=[ 685], 90.00th=[ 758], 95.00th=[ 807], 00:11:47.308 | 99.00th=[ 898], 99.50th=[ 930], 99.90th=[ 988], 99.95th=[ 988], 00:11:47.308 | 99.99th=[ 988] 00:11:47.308 bw ( KiB/s): min= 4096, max= 4096, per=31.15%, avg=4096.00, stdev= 0.00, samples=1 00:11:47.308 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:47.308 lat (usec) : 250=0.88%, 500=21.93%, 750=39.37%, 1000=24.28% 00:11:47.308 lat (msec) : 2=13.54% 00:11:47.308 cpu : usr=2.70%, sys=5.90%, ctx=1361, majf=0, minf=1 00:11:47.308 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:47.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.308 issued rwts: total=512,847,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.308 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:47.308 00:11:47.308 Run status group 0 (all jobs): 00:11:47.308 READ: bw=8420KiB/s (8622kB/s), 2046KiB/s-2282KiB/s (2095kB/s-2336kB/s), io=8428KiB (8630kB), run=1001-1001msec 00:11:47.308 WRITE: bw=12.8MiB/s (13.5MB/s), 2765KiB/s-4092KiB/s (2832kB/s-4190kB/s), io=12.9MiB (13.5MB), run=1001-1001msec 00:11:47.308 00:11:47.308 Disk stats (read/write): 00:11:47.308 nvme0n1: ios=568/817, merge=0/0, ticks=385/287, in_queue=672, util=86.97% 00:11:47.308 nvme0n2: ios=546/512, merge=0/0, ticks=520/250, in_queue=770, util=91.03% 00:11:47.308 nvme0n3: ios=522/512, merge=0/0, ticks=625/306, in_queue=931, util=95.25% 00:11:47.308 nvme0n4: ios=566/592, merge=0/0, ticks=531/241, in_queue=772, util=97.22% 00:11:47.308 07:21:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:47.570 [global] 
00:11:47.570 thread=1 00:11:47.570 invalidate=1 00:11:47.570 rw=write 00:11:47.570 time_based=1 00:11:47.570 runtime=1 00:11:47.570 ioengine=libaio 00:11:47.570 direct=1 00:11:47.570 bs=4096 00:11:47.570 iodepth=128 00:11:47.570 norandommap=0 00:11:47.570 numjobs=1 00:11:47.570 00:11:47.570 verify_dump=1 00:11:47.570 verify_backlog=512 00:11:47.570 verify_state_save=0 00:11:47.570 do_verify=1 00:11:47.570 verify=crc32c-intel 00:11:47.570 [job0] 00:11:47.570 filename=/dev/nvme0n1 00:11:47.570 [job1] 00:11:47.570 filename=/dev/nvme0n2 00:11:47.570 [job2] 00:11:47.570 filename=/dev/nvme0n3 00:11:47.570 [job3] 00:11:47.570 filename=/dev/nvme0n4 00:11:47.570 Could not set queue depth (nvme0n1) 00:11:47.570 Could not set queue depth (nvme0n2) 00:11:47.570 Could not set queue depth (nvme0n3) 00:11:47.570 Could not set queue depth (nvme0n4) 00:11:47.831 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.831 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.831 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.831 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.831 fio-3.35 00:11:47.831 Starting 4 threads 00:11:49.218 00:11:49.218 job0: (groupid=0, jobs=1): err= 0: pid=1965474: Tue Nov 26 07:21:33 2024 00:11:49.218 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:11:49.218 slat (nsec): min=892, max=18475k, avg=113115.82, stdev=754324.63 00:11:49.218 clat (usec): min=2845, max=44568, avg=14742.74, stdev=8075.66 00:11:49.218 lat (usec): min=3437, max=44593, avg=14855.85, stdev=8143.87 00:11:49.218 clat percentiles (usec): 00:11:49.218 | 1.00th=[ 5014], 5.00th=[ 5800], 10.00th=[ 6194], 20.00th=[ 7373], 00:11:49.218 | 30.00th=[ 8455], 40.00th=[ 9241], 50.00th=[12256], 60.00th=[15795], 00:11:49.218 | 
70.00th=[19268], 80.00th=[22938], 90.00th=[27132], 95.00th=[29754], 00:11:49.218 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36439], 99.95th=[38011], 00:11:49.218 | 99.99th=[44827] 00:11:49.218 write: IOPS=5068, BW=19.8MiB/s (20.8MB/s)(19.8MiB/1002msec); 0 zone resets 00:11:49.218 slat (nsec): min=1599, max=8738.1k, avg=82774.97, stdev=555329.16 00:11:49.218 clat (usec): min=736, max=51088, avg=11666.52, stdev=7512.97 00:11:49.218 lat (usec): min=747, max=51114, avg=11749.29, stdev=7555.76 00:11:49.218 clat percentiles (usec): 00:11:49.218 | 1.00th=[ 1926], 5.00th=[ 3621], 10.00th=[ 4293], 20.00th=[ 5669], 00:11:49.218 | 30.00th=[ 7308], 40.00th=[ 8160], 50.00th=[ 9503], 60.00th=[12911], 00:11:49.218 | 70.00th=[15008], 80.00th=[17433], 90.00th=[19006], 95.00th=[22414], 00:11:49.218 | 99.00th=[45351], 99.50th=[49546], 99.90th=[51119], 99.95th=[51119], 00:11:49.218 | 99.99th=[51119] 00:11:49.218 bw ( KiB/s): min=19136, max=20480, per=23.70%, avg=19808.00, stdev=950.35, samples=2 00:11:49.218 iops : min= 4784, max= 5120, avg=4952.00, stdev=237.59, samples=2 00:11:49.218 lat (usec) : 750=0.02%, 1000=0.01% 00:11:49.218 lat (msec) : 2=0.64%, 4=4.35%, 10=44.77%, 20=32.42%, 50=17.55% 00:11:49.218 lat (msec) : 100=0.24% 00:11:49.218 cpu : usr=3.80%, sys=5.39%, ctx=342, majf=0, minf=2 00:11:49.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:49.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.218 issued rwts: total=4608,5079,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:49.218 job1: (groupid=0, jobs=1): err= 0: pid=1965492: Tue Nov 26 07:21:33 2024 00:11:49.218 read: IOPS=5712, BW=22.3MiB/s (23.4MB/s)(22.4MiB/1003msec) 00:11:49.218 slat (nsec): min=913, max=8951.0k, avg=89628.13, stdev=610833.70 00:11:49.218 clat (usec): min=1162, max=44727, 
avg=11893.54, stdev=7042.80 00:11:49.218 lat (usec): min=2410, max=44734, avg=11983.16, stdev=7085.75 00:11:49.218 clat percentiles (usec): 00:11:49.218 | 1.00th=[ 2868], 5.00th=[ 5407], 10.00th=[ 6063], 20.00th=[ 7570], 00:11:49.218 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 9372], 60.00th=[10552], 00:11:49.218 | 70.00th=[12518], 80.00th=[14615], 90.00th=[22414], 95.00th=[27395], 00:11:49.218 | 99.00th=[40633], 99.50th=[44303], 99.90th=[44303], 99.95th=[44827], 00:11:49.218 | 99.99th=[44827] 00:11:49.218 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:11:49.218 slat (nsec): min=1611, max=7854.6k, avg=69886.59, stdev=471699.29 00:11:49.218 clat (usec): min=1053, max=28118, avg=9574.81, stdev=4767.09 00:11:49.218 lat (usec): min=1073, max=28125, avg=9644.70, stdev=4796.84 00:11:49.218 clat percentiles (usec): 00:11:49.218 | 1.00th=[ 3163], 5.00th=[ 4883], 10.00th=[ 5669], 20.00th=[ 6652], 00:11:49.218 | 30.00th=[ 7439], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8586], 00:11:49.218 | 70.00th=[ 9372], 80.00th=[11207], 90.00th=[16581], 95.00th=[22152], 00:11:49.218 | 99.00th=[25822], 99.50th=[26346], 99.90th=[27395], 99.95th=[27395], 00:11:49.218 | 99.99th=[28181] 00:11:49.218 bw ( KiB/s): min=24344, max=24576, per=29.26%, avg=24460.00, stdev=164.05, samples=2 00:11:49.218 iops : min= 6086, max= 6144, avg=6115.00, stdev=41.01, samples=2 00:11:49.218 lat (msec) : 2=0.12%, 4=2.00%, 10=62.52%, 20=26.31%, 50=9.05% 00:11:49.218 cpu : usr=3.99%, sys=7.78%, ctx=387, majf=0, minf=1 00:11:49.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:49.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.218 issued rwts: total=5730,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:49.218 job2: (groupid=0, jobs=1): err= 0: pid=1965515: 
Tue Nov 26 07:21:33 2024 00:11:49.218 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:11:49.218 slat (nsec): min=1032, max=7904.2k, avg=79018.17, stdev=521166.09 00:11:49.218 clat (usec): min=2971, max=30736, avg=10414.41, stdev=4922.93 00:11:49.218 lat (usec): min=2972, max=30744, avg=10493.43, stdev=4953.23 00:11:49.218 clat percentiles (usec): 00:11:49.218 | 1.00th=[ 4621], 5.00th=[ 5800], 10.00th=[ 6128], 20.00th=[ 7570], 00:11:49.218 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241], 00:11:49.218 | 70.00th=[10159], 80.00th=[11994], 90.00th=[17695], 95.00th=[22676], 00:11:49.218 | 99.00th=[29754], 99.50th=[30802], 99.90th=[30802], 99.95th=[30802], 00:11:49.218 | 99.99th=[30802] 00:11:49.218 write: IOPS=5725, BW=22.4MiB/s (23.5MB/s)(22.4MiB/1001msec); 0 zone resets 00:11:49.218 slat (nsec): min=1716, max=45451k, avg=90652.28, stdev=845594.00 00:11:49.218 clat (usec): min=565, max=78995, avg=11348.65, stdev=8433.28 00:11:49.218 lat (usec): min=1691, max=79005, avg=11439.30, stdev=8493.85 00:11:49.218 clat percentiles (usec): 00:11:49.218 | 1.00th=[ 4686], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 7439], 00:11:49.218 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8717], 00:11:49.218 | 70.00th=[10552], 80.00th=[14091], 90.00th=[18744], 95.00th=[23725], 00:11:49.218 | 99.00th=[53740], 99.50th=[53740], 99.90th=[79168], 99.95th=[79168], 00:11:49.218 | 99.99th=[79168] 00:11:49.218 bw ( KiB/s): min=24576, max=24576, per=29.40%, avg=24576.00, stdev= 0.00, samples=1 00:11:49.218 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:11:49.218 lat (usec) : 750=0.01% 00:11:49.218 lat (msec) : 2=0.04%, 4=0.45%, 10=68.12%, 20=22.92%, 50=7.35% 00:11:49.218 lat (msec) : 100=1.12% 00:11:49.218 cpu : usr=4.70%, sys=6.50%, ctx=453, majf=0, minf=1 00:11:49.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:49.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:11:49.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.218 issued rwts: total=5632,5731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:49.218 job3: (groupid=0, jobs=1): err= 0: pid=1965523: Tue Nov 26 07:21:33 2024 00:11:49.218 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:11:49.218 slat (nsec): min=981, max=10243k, avg=104963.48, stdev=603051.42 00:11:49.218 clat (usec): min=4559, max=37083, avg=13365.63, stdev=5590.71 00:11:49.218 lat (usec): min=4565, max=37084, avg=13470.59, stdev=5605.73 00:11:49.218 clat percentiles (usec): 00:11:49.218 | 1.00th=[ 6521], 5.00th=[ 7177], 10.00th=[ 7701], 20.00th=[ 8979], 00:11:49.218 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[11731], 60.00th=[13566], 00:11:49.218 | 70.00th=[15008], 80.00th=[16712], 90.00th=[21365], 95.00th=[25297], 00:11:49.218 | 99.00th=[31589], 99.50th=[31589], 99.90th=[36963], 99.95th=[36963], 00:11:49.218 | 99.99th=[36963] 00:11:49.218 write: IOPS=3994, BW=15.6MiB/s (16.4MB/s)(15.6MiB/1003msec); 0 zone resets 00:11:49.218 slat (nsec): min=1630, max=56648k, avg=149613.30, stdev=1728248.59 00:11:49.218 clat (usec): min=487, max=196330, avg=16616.32, stdev=19075.58 00:11:49.218 lat (usec): min=1530, max=196341, avg=16765.93, stdev=19288.94 00:11:49.218 clat percentiles (msec): 00:11:49.218 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 9], 00:11:49.218 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:11:49.218 | 70.00th=[ 15], 80.00th=[ 17], 90.00th=[ 31], 95.00th=[ 50], 00:11:49.218 | 99.00th=[ 103], 99.50th=[ 150], 99.90th=[ 197], 99.95th=[ 197], 00:11:49.218 | 99.99th=[ 197] 00:11:49.218 bw ( KiB/s): min=12288, max=18736, per=18.56%, avg=15512.00, stdev=4559.42, samples=2 00:11:49.218 iops : min= 3072, max= 4684, avg=3878.00, stdev=1139.86, samples=2 00:11:49.218 lat (usec) : 500=0.01% 00:11:49.218 lat (msec) : 2=0.24%, 4=0.63%, 10=33.72%, 20=49.86%, 50=13.86% 
00:11:49.218 lat (msec) : 100=0.84%, 250=0.84% 00:11:49.218 cpu : usr=2.89%, sys=4.39%, ctx=432, majf=0, minf=2 00:11:49.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:49.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.218 issued rwts: total=3584,4006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:49.218 00:11:49.218 Run status group 0 (all jobs): 00:11:49.218 READ: bw=76.2MiB/s (79.9MB/s), 14.0MiB/s-22.3MiB/s (14.6MB/s-23.4MB/s), io=76.4MiB (80.1MB), run=1001-1003msec 00:11:49.218 WRITE: bw=81.6MiB/s (85.6MB/s), 15.6MiB/s-23.9MiB/s (16.4MB/s-25.1MB/s), io=81.9MiB (85.9MB), run=1001-1003msec 00:11:49.218 00:11:49.218 Disk stats (read/write): 00:11:49.218 nvme0n1: ios=4003/4096, merge=0/0, ticks=26526/21686, in_queue=48212, util=94.09% 00:11:49.219 nvme0n2: ios=4632/4780, merge=0/0, ticks=29318/21950, in_queue=51268, util=94.80% 00:11:49.219 nvme0n3: ios=5166/5143, merge=0/0, ticks=25447/20825, in_queue=46272, util=97.67% 00:11:49.219 nvme0n4: ios=3107/3279, merge=0/0, ticks=14275/12085, in_queue=26360, util=95.29% 00:11:49.219 07:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:49.219 [global] 00:11:49.219 thread=1 00:11:49.219 invalidate=1 00:11:49.219 rw=randwrite 00:11:49.219 time_based=1 00:11:49.219 runtime=1 00:11:49.219 ioengine=libaio 00:11:49.219 direct=1 00:11:49.219 bs=4096 00:11:49.219 iodepth=128 00:11:49.219 norandommap=0 00:11:49.219 numjobs=1 00:11:49.219 00:11:49.219 verify_dump=1 00:11:49.219 verify_backlog=512 00:11:49.219 verify_state_save=0 00:11:49.219 do_verify=1 00:11:49.219 verify=crc32c-intel 00:11:49.219 [job0] 00:11:49.219 filename=/dev/nvme0n1 00:11:49.219 [job1] 
00:11:49.219 filename=/dev/nvme0n2 00:11:49.219 [job2] 00:11:49.219 filename=/dev/nvme0n3 00:11:49.219 [job3] 00:11:49.219 filename=/dev/nvme0n4 00:11:49.219 Could not set queue depth (nvme0n1) 00:11:49.219 Could not set queue depth (nvme0n2) 00:11:49.219 Could not set queue depth (nvme0n3) 00:11:49.219 Could not set queue depth (nvme0n4) 00:11:49.479 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.479 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.479 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.479 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.479 fio-3.35 00:11:49.479 Starting 4 threads 00:11:50.865 00:11:50.865 job0: (groupid=0, jobs=1): err= 0: pid=1965984: Tue Nov 26 07:21:34 2024 00:11:50.865 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.1MiB/1007msec) 00:11:50.865 slat (nsec): min=900, max=23589k, avg=97099.38, stdev=715350.60 00:11:50.865 clat (usec): min=1504, max=44734, avg=12711.32, stdev=8074.31 00:11:50.865 lat (usec): min=3472, max=44740, avg=12808.42, stdev=8120.97 00:11:50.865 clat percentiles (usec): 00:11:50.865 | 1.00th=[ 3916], 5.00th=[ 5473], 10.00th=[ 5800], 20.00th=[ 6521], 00:11:50.865 | 30.00th=[ 7832], 40.00th=[ 8225], 50.00th=[ 9503], 60.00th=[11863], 00:11:50.865 | 70.00th=[13960], 80.00th=[18220], 90.00th=[26084], 95.00th=[31065], 00:11:50.865 | 99.00th=[40633], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:11:50.865 | 99.99th=[44827] 00:11:50.865 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:11:50.865 slat (nsec): min=1515, max=13096k, avg=84244.76, stdev=531423.89 00:11:50.865 clat (usec): min=1146, max=50817, avg=11086.57, stdev=7491.19 00:11:50.865 lat (usec): min=1156, max=50824, avg=11170.81, stdev=7534.37 
00:11:50.865 clat percentiles (usec): 00:11:50.865 | 1.00th=[ 2245], 5.00th=[ 4047], 10.00th=[ 4555], 20.00th=[ 6325], 00:11:50.865 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 8717], 60.00th=[ 9503], 00:11:50.865 | 70.00th=[12125], 80.00th=[15401], 90.00th=[20317], 95.00th=[25297], 00:11:50.865 | 99.00th=[43254], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:11:50.865 | 99.99th=[50594] 00:11:50.865 bw ( KiB/s): min=15480, max=28672, per=23.60%, avg=22076.00, stdev=9328.15, samples=2 00:11:50.865 iops : min= 3870, max= 7168, avg=5519.00, stdev=2332.04, samples=2 00:11:50.865 lat (msec) : 2=0.34%, 4=2.92%, 10=54.52%, 20=27.95%, 50=14.00% 00:11:50.865 lat (msec) : 100=0.28% 00:11:50.865 cpu : usr=4.27%, sys=4.77%, ctx=514, majf=0, minf=1 00:11:50.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:50.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:50.865 issued rwts: total=5135,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:50.865 job1: (groupid=0, jobs=1): err= 0: pid=1965994: Tue Nov 26 07:21:34 2024 00:11:50.865 read: IOPS=7699, BW=30.1MiB/s (31.5MB/s)(30.2MiB/1003msec) 00:11:50.865 slat (nsec): min=914, max=9885.7k, avg=63499.57, stdev=444483.50 00:11:50.866 clat (usec): min=2673, max=28096, avg=8073.76, stdev=3198.31 00:11:50.866 lat (usec): min=2962, max=28109, avg=8137.26, stdev=3240.61 00:11:50.866 clat percentiles (usec): 00:11:50.866 | 1.00th=[ 4015], 5.00th=[ 5473], 10.00th=[ 5932], 20.00th=[ 6652], 00:11:50.866 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7439], 00:11:50.866 | 70.00th=[ 7635], 80.00th=[ 8160], 90.00th=[10683], 95.00th=[17957], 00:11:50.866 | 99.00th=[20579], 99.50th=[21103], 99.90th=[24249], 99.95th=[27657], 00:11:50.866 | 99.99th=[28181] 00:11:50.866 write: IOPS=8167, BW=31.9MiB/s 
(33.5MB/s)(32.0MiB/1003msec); 0 zone resets 00:11:50.866 slat (nsec): min=1501, max=13398k, avg=55805.32, stdev=374902.46 00:11:50.866 clat (usec): min=3533, max=32980, avg=7900.26, stdev=3175.99 00:11:50.866 lat (usec): min=3536, max=32989, avg=7956.06, stdev=3204.84 00:11:50.866 clat percentiles (usec): 00:11:50.866 | 1.00th=[ 4178], 5.00th=[ 5276], 10.00th=[ 5604], 20.00th=[ 6128], 00:11:50.866 | 30.00th=[ 6652], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7308], 00:11:50.866 | 70.00th=[ 7701], 80.00th=[ 8225], 90.00th=[12125], 95.00th=[15139], 00:11:50.866 | 99.00th=[21103], 99.50th=[22676], 99.90th=[23462], 99.95th=[25560], 00:11:50.866 | 99.99th=[32900] 00:11:50.866 bw ( KiB/s): min=27128, max=37736, per=34.67%, avg=32432.00, stdev=7500.99, samples=2 00:11:50.866 iops : min= 6782, max= 9434, avg=8108.00, stdev=1875.25, samples=2 00:11:50.866 lat (msec) : 4=0.86%, 10=87.66%, 20=9.99%, 50=1.49% 00:11:50.866 cpu : usr=5.19%, sys=7.98%, ctx=764, majf=0, minf=1 00:11:50.866 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:50.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:50.866 issued rwts: total=7723,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:50.866 job2: (groupid=0, jobs=1): err= 0: pid=1966012: Tue Nov 26 07:21:34 2024 00:11:50.866 read: IOPS=2733, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1007msec) 00:11:50.866 slat (nsec): min=978, max=27493k, avg=199951.02, stdev=1286376.58 00:11:50.866 clat (usec): min=1526, max=71649, avg=23992.54, stdev=14027.00 00:11:50.866 lat (usec): min=6365, max=71653, avg=24192.49, stdev=14087.22 00:11:50.866 clat percentiles (usec): 00:11:50.866 | 1.00th=[ 7570], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[13304], 00:11:50.866 | 30.00th=[14746], 40.00th=[16581], 50.00th=[19268], 60.00th=[21627], 00:11:50.866 | 
70.00th=[26346], 80.00th=[38536], 90.00th=[43779], 95.00th=[53740], 00:11:50.866 | 99.00th=[66847], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:11:50.866 | 99.99th=[71828] 00:11:50.866 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:11:50.866 slat (nsec): min=1641, max=22681k, avg=141536.20, stdev=954770.46 00:11:50.866 clat (usec): min=5442, max=65877, avg=19455.93, stdev=11110.03 00:11:50.866 lat (usec): min=5450, max=65887, avg=19597.46, stdev=11151.26 00:11:50.866 clat percentiles (usec): 00:11:50.866 | 1.00th=[ 6456], 5.00th=[ 7635], 10.00th=[ 9372], 20.00th=[11994], 00:11:50.866 | 30.00th=[13042], 40.00th=[15270], 50.00th=[16057], 60.00th=[17171], 00:11:50.866 | 70.00th=[20579], 80.00th=[26084], 90.00th=[34866], 95.00th=[43779], 00:11:50.866 | 99.00th=[62129], 99.50th=[62129], 99.90th=[65799], 99.95th=[65799], 00:11:50.866 | 99.99th=[65799] 00:11:50.866 bw ( KiB/s): min=11032, max=13544, per=13.13%, avg=12288.00, stdev=1776.25, samples=2 00:11:50.866 iops : min= 2758, max= 3386, avg=3072.00, stdev=444.06, samples=2 00:11:50.866 lat (msec) : 2=0.02%, 10=11.74%, 20=49.80%, 50=34.06%, 100=4.38% 00:11:50.866 cpu : usr=1.79%, sys=3.78%, ctx=287, majf=0, minf=1 00:11:50.866 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:50.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:50.866 issued rwts: total=2753,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:50.866 job3: (groupid=0, jobs=1): err= 0: pid=1966019: Tue Nov 26 07:21:34 2024 00:11:50.866 read: IOPS=6601, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1007msec) 00:11:50.866 slat (nsec): min=981, max=16726k, avg=78554.93, stdev=614818.93 00:11:50.866 clat (usec): min=1863, max=44376, avg=10141.35, stdev=4820.94 00:11:50.866 lat (usec): min=1868, max=44391, avg=10219.90, 
stdev=4868.75 00:11:50.866 clat percentiles (usec): 00:11:50.866 | 1.00th=[ 3621], 5.00th=[ 5800], 10.00th=[ 6849], 20.00th=[ 7177], 00:11:50.866 | 30.00th=[ 7963], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 9110], 00:11:50.866 | 70.00th=[11076], 80.00th=[12780], 90.00th=[13698], 95.00th=[17957], 00:11:50.866 | 99.00th=[30540], 99.50th=[35914], 99.90th=[41681], 99.95th=[44303], 00:11:50.866 | 99.99th=[44303] 00:11:50.866 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:11:50.866 slat (nsec): min=1605, max=9827.7k, avg=59285.56, stdev=397257.69 00:11:50.866 clat (usec): min=721, max=44386, avg=9014.42, stdev=5695.24 00:11:50.866 lat (usec): min=730, max=44397, avg=9073.71, stdev=5736.36 00:11:50.866 clat percentiles (usec): 00:11:50.866 | 1.00th=[ 1663], 5.00th=[ 3687], 10.00th=[ 4752], 20.00th=[ 5932], 00:11:50.866 | 30.00th=[ 6652], 40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 7963], 00:11:50.866 | 70.00th=[ 8455], 80.00th=[10945], 90.00th=[13304], 95.00th=[20841], 00:11:50.866 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36439], 99.95th=[36439], 00:11:50.866 | 99.99th=[44303] 00:11:50.866 bw ( KiB/s): min=20480, max=32768, per=28.46%, avg=26624.00, stdev=8688.93, samples=2 00:11:50.866 iops : min= 5120, max= 8192, avg=6656.00, stdev=2172.23, samples=2 00:11:50.866 lat (usec) : 750=0.02% 00:11:50.866 lat (msec) : 2=0.77%, 4=3.82%, 10=65.48%, 20=24.87%, 50=5.04% 00:11:50.866 cpu : usr=6.16%, sys=7.16%, ctx=522, majf=0, minf=1 00:11:50.866 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:50.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:50.866 issued rwts: total=6648,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:50.866 00:11:50.866 Run status group 0 (all jobs): 00:11:50.866 READ: bw=86.3MiB/s (90.5MB/s), 
10.7MiB/s-30.1MiB/s (11.2MB/s-31.5MB/s), io=86.9MiB (91.2MB), run=1003-1007msec 00:11:50.866 WRITE: bw=91.4MiB/s (95.8MB/s), 11.9MiB/s-31.9MiB/s (12.5MB/s-33.5MB/s), io=92.0MiB (96.5MB), run=1003-1007msec 00:11:50.866 00:11:50.866 Disk stats (read/write): 00:11:50.866 nvme0n1: ios=4803/5120, merge=0/0, ticks=26218/23072, in_queue=49290, util=95.29% 00:11:50.866 nvme0n2: ios=6180/6323, merge=0/0, ticks=25525/24764, in_queue=50289, util=87.56% 00:11:50.866 nvme0n3: ios=2128/2547, merge=0/0, ticks=16432/12730, in_queue=29162, util=98.42% 00:11:50.866 nvme0n4: ios=6044/6144, merge=0/0, ticks=52796/42401, in_queue=95197, util=98.82% 00:11:50.866 07:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:50.866 07:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1966146 00:11:50.866 07:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:50.866 07:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:50.866 [global] 00:11:50.866 thread=1 00:11:50.866 invalidate=1 00:11:50.866 rw=read 00:11:50.866 time_based=1 00:11:50.866 runtime=10 00:11:50.866 ioengine=libaio 00:11:50.866 direct=1 00:11:50.866 bs=4096 00:11:50.866 iodepth=1 00:11:50.866 norandommap=1 00:11:50.866 numjobs=1 00:11:50.866 00:11:50.866 [job0] 00:11:50.866 filename=/dev/nvme0n1 00:11:50.866 [job1] 00:11:50.866 filename=/dev/nvme0n2 00:11:50.866 [job2] 00:11:50.866 filename=/dev/nvme0n3 00:11:50.866 [job3] 00:11:50.866 filename=/dev/nvme0n4 00:11:50.866 Could not set queue depth (nvme0n1) 00:11:50.866 Could not set queue depth (nvme0n2) 00:11:50.866 Could not set queue depth (nvme0n3) 00:11:50.866 Could not set queue depth (nvme0n4) 00:11:51.127 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:51.127 job1: (g=0): rw=read, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:51.127 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:51.127 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:51.127 fio-3.35 00:11:51.127 Starting 4 threads 00:11:53.672 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:53.933 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10190848, buflen=4096 00:11:53.933 fio: pid=1966504, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:53.933 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:54.194 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=6201344, buflen=4096 00:11:54.194 fio: pid=1966498, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:54.194 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.194 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:54.455 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=6123520, buflen=4096 00:11:54.455 fio: pid=1966465, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:54.455 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.455 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:54.455 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=16064512, buflen=4096 00:11:54.455 fio: pid=1966479, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:54.455 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.455 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:54.455 00:11:54.455 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1966465: Tue Nov 26 07:21:38 2024 00:11:54.455 read: IOPS=502, BW=2007KiB/s (2055kB/s)(5980KiB/2980msec) 00:11:54.455 slat (usec): min=7, max=34479, avg=78.25, stdev=1123.32 00:11:54.455 clat (usec): min=620, max=42984, avg=1889.99, stdev=6025.67 00:11:54.455 lat (usec): min=651, max=43010, avg=1968.27, stdev=6122.07 00:11:54.455 clat percentiles (usec): 00:11:54.455 | 1.00th=[ 742], 5.00th=[ 816], 10.00th=[ 848], 20.00th=[ 914], 00:11:54.455 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 996], 60.00th=[ 1020], 00:11:54.455 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1123], 95.00th=[ 1156], 00:11:54.455 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:11:54.455 | 99.99th=[42730] 00:11:54.455 bw ( KiB/s): min= 96, max= 3936, per=15.14%, avg=1800.00, stdev=1963.53, samples=5 00:11:54.455 iops : min= 24, max= 984, avg=450.00, stdev=490.88, samples=5 00:11:54.455 lat (usec) : 750=1.14%, 1000=51.80% 00:11:54.455 lat (msec) : 2=44.79%, 50=2.21% 00:11:54.455 cpu : usr=0.67%, sys=1.34%, ctx=1500, majf=0, minf=2 00:11:54.455 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:11:54.455 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.455 issued rwts: total=1496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.455 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.455 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1966479: Tue Nov 26 07:21:38 2024 00:11:54.455 read: IOPS=1238, BW=4952KiB/s (5071kB/s)(15.3MiB/3168msec) 00:11:54.455 slat (usec): min=6, max=26408, avg=51.46, stdev=755.80 00:11:54.455 clat (usec): min=296, max=1364, avg=743.52, stdev=107.31 00:11:54.455 lat (usec): min=302, max=27128, avg=794.99, stdev=764.32 00:11:54.455 clat percentiles (usec): 00:11:54.455 | 1.00th=[ 469], 5.00th=[ 562], 10.00th=[ 594], 20.00th=[ 652], 00:11:54.455 | 30.00th=[ 693], 40.00th=[ 725], 50.00th=[ 750], 60.00th=[ 783], 00:11:54.455 | 70.00th=[ 807], 80.00th=[ 832], 90.00th=[ 873], 95.00th=[ 906], 00:11:54.455 | 99.00th=[ 955], 99.50th=[ 979], 99.90th=[ 1074], 99.95th=[ 1221], 00:11:54.455 | 99.99th=[ 1369] 00:11:54.455 bw ( KiB/s): min= 4506, max= 5224, per=42.12%, avg=5009.67, stdev=256.03, samples=6 00:11:54.455 iops : min= 1126, max= 1306, avg=1252.33, stdev=64.20, samples=6 00:11:54.455 lat (usec) : 500=2.12%, 750=46.90%, 1000=50.73% 00:11:54.455 lat (msec) : 2=0.23% 00:11:54.455 cpu : usr=1.01%, sys=3.54%, ctx=3931, majf=0, minf=1 00:11:54.455 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.455 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.455 issued rwts: total=3923,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.455 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.455 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1966498: Tue Nov 26 07:21:38 2024 00:11:54.455 read: IOPS=542, BW=2168KiB/s 
(2220kB/s)(6056KiB/2794msec) 00:11:54.455 slat (usec): min=6, max=13779, avg=42.99, stdev=452.92 00:11:54.455 clat (usec): min=448, max=43035, avg=1781.08, stdev=5665.65 00:11:54.456 lat (usec): min=475, max=43062, avg=1824.07, stdev=5681.65 00:11:54.456 clat percentiles (usec): 00:11:54.456 | 1.00th=[ 685], 5.00th=[ 783], 10.00th=[ 840], 20.00th=[ 906], 00:11:54.456 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 979], 60.00th=[ 996], 00:11:54.456 | 70.00th=[ 1029], 80.00th=[ 1074], 90.00th=[ 1139], 95.00th=[ 1172], 00:11:54.456 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[43254], 00:11:54.456 | 99.99th=[43254] 00:11:54.456 bw ( KiB/s): min= 104, max= 3960, per=17.07%, avg=2030.40, stdev=1796.13, samples=5 00:11:54.456 iops : min= 26, max= 990, avg=507.60, stdev=449.03, samples=5 00:11:54.456 lat (usec) : 500=0.07%, 750=2.97%, 1000=57.95% 00:11:54.456 lat (msec) : 2=36.96%, 50=1.98% 00:11:54.456 cpu : usr=1.04%, sys=2.11%, ctx=1518, majf=0, minf=2 00:11:54.456 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.456 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.456 issued rwts: total=1515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.456 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.456 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1966504: Tue Nov 26 07:21:38 2024 00:11:54.456 read: IOPS=953, BW=3813KiB/s (3905kB/s)(9952KiB/2610msec) 00:11:54.456 slat (nsec): min=8057, max=61845, avg=27118.67, stdev=2885.54 00:11:54.456 clat (usec): min=303, max=1294, avg=1003.88, stdev=91.81 00:11:54.456 lat (usec): min=330, max=1321, avg=1031.00, stdev=91.74 00:11:54.456 clat percentiles (usec): 00:11:54.456 | 1.00th=[ 766], 5.00th=[ 832], 10.00th=[ 881], 20.00th=[ 930], 00:11:54.456 | 30.00th=[ 971], 40.00th=[ 996], 50.00th=[ 1012], 
60.00th=[ 1037], 00:11:54.456 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1139], 00:11:54.456 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1270], 99.95th=[ 1270], 00:11:54.456 | 99.99th=[ 1303] 00:11:54.456 bw ( KiB/s): min= 3824, max= 3904, per=32.43%, avg=3857.60, stdev=33.66, samples=5 00:11:54.456 iops : min= 956, max= 976, avg=964.40, stdev= 8.41, samples=5 00:11:54.456 lat (usec) : 500=0.08%, 750=0.56%, 1000=42.11% 00:11:54.456 lat (msec) : 2=57.21% 00:11:54.456 cpu : usr=0.57%, sys=3.49%, ctx=2493, majf=0, minf=2 00:11:54.456 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.456 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.456 issued rwts: total=2489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.456 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.456 00:11:54.456 Run status group 0 (all jobs): 00:11:54.456 READ: bw=11.6MiB/s (12.2MB/s), 2007KiB/s-4952KiB/s (2055kB/s-5071kB/s), io=36.8MiB (38.6MB), run=2610-3168msec 00:11:54.456 00:11:54.456 Disk stats (read/write): 00:11:54.456 nvme0n1: ios=1396/0, merge=0/0, ticks=2682/0, in_queue=2682, util=92.69% 00:11:54.456 nvme0n2: ios=3852/0, merge=0/0, ticks=2772/0, in_queue=2772, util=92.26% 00:11:54.456 nvme0n3: ios=1352/0, merge=0/0, ticks=2465/0, in_queue=2465, util=96.03% 00:11:54.456 nvme0n4: ios=2526/0, merge=0/0, ticks=3332/0, in_queue=3332, util=100.00% 00:11:54.717 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.717 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:54.977 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.977 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:54.977 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.977 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:55.238 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:55.238 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:55.498 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:55.498 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1966146 00:11:55.498 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:55.498 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:55.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.498 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:55.498 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:55.498 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:55.498 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.498 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:55.498 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.498 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:55.498 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:55.498 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:55.498 nvmf hotplug test: fio failed as expected 00:11:55.498 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:55.759 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- 
# modprobe -v -r nvme-tcp 00:11:55.760 rmmod nvme_tcp 00:11:55.760 rmmod nvme_fabrics 00:11:55.760 rmmod nvme_keyring 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1962613 ']' 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1962613 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1962613 ']' 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1962613 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.760 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1962613 00:11:56.021 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:56.021 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:56.021 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1962613' 00:11:56.021 killing process with pid 1962613 00:11:56.021 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1962613 00:11:56.021 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1962613 00:11:56.021 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:56.021 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:56.021 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:56.021 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:56.021 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:56.021 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:56.021 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:56.021 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:56.021 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:56.021 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.021 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.021 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:58.565 00:11:58.565 real 0m30.236s 00:11:58.565 user 2m38.794s 00:11:58.565 sys 0m10.390s 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.565 ************************************ 00:11:58.565 END TEST nvmf_fio_target 00:11:58.565 ************************************ 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:58.565 ************************************ 00:11:58.565 START TEST nvmf_bdevio 00:11:58.565 ************************************ 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:58.565 * Looking for test storage... 00:11:58.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:58.565 07:21:42 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:58.565 07:21:42 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:58.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.565 --rc genhtml_branch_coverage=1 00:11:58.565 --rc genhtml_function_coverage=1 00:11:58.565 --rc genhtml_legend=1 00:11:58.565 --rc geninfo_all_blocks=1 00:11:58.565 --rc geninfo_unexecuted_blocks=1 00:11:58.565 00:11:58.565 ' 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:58.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.565 --rc genhtml_branch_coverage=1 00:11:58.565 --rc genhtml_function_coverage=1 00:11:58.565 --rc genhtml_legend=1 00:11:58.565 --rc geninfo_all_blocks=1 00:11:58.565 --rc geninfo_unexecuted_blocks=1 00:11:58.565 00:11:58.565 ' 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:58.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.565 --rc genhtml_branch_coverage=1 00:11:58.565 --rc genhtml_function_coverage=1 00:11:58.565 --rc genhtml_legend=1 00:11:58.565 --rc geninfo_all_blocks=1 00:11:58.565 --rc geninfo_unexecuted_blocks=1 00:11:58.565 00:11:58.565 ' 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:58.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.565 --rc genhtml_branch_coverage=1 00:11:58.565 --rc 
genhtml_function_coverage=1 00:11:58.565 --rc genhtml_legend=1 00:11:58.565 --rc geninfo_all_blocks=1 00:11:58.565 --rc geninfo_unexecuted_blocks=1 00:11:58.565 00:11:58.565 ' 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.565 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.566 07:21:42 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:58.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:58.566 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:06.706 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.706 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.707 07:21:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:06.707 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:06.707 Found net devices under 0000:31:00.0: cvl_0_0 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:06.707 Found net devices under 0000:31:00.1: cvl_0_1 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:06.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:06.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms 00:12:06.707 00:12:06.707 --- 10.0.0.2 ping statistics --- 00:12:06.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.707 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:06.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:06.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:12:06.707 00:12:06.707 --- 10.0.0.1 ping statistics --- 00:12:06.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.707 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1972168 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt 
-i 0 -e 0xFFFF -m 0x78 00:12:06.707 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1972168 00:12:06.708 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1972168 ']' 00:12:06.708 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.708 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.708 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.708 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.708 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:06.970 [2024-11-26 07:21:50.857339] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:12:06.970 [2024-11-26 07:21:50.857409] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.970 [2024-11-26 07:21:50.967907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.970 [2024-11-26 07:21:51.018482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.970 [2024-11-26 07:21:51.018534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:06.970 [2024-11-26 07:21:51.018543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.970 [2024-11-26 07:21:51.018550] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.970 [2024-11-26 07:21:51.018557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:06.970 [2024-11-26 07:21:51.020590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:06.970 [2024-11-26 07:21:51.020645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:06.970 [2024-11-26 07:21:51.020807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.970 [2024-11-26 07:21:51.020806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:07.914 [2024-11-26 07:21:51.743582] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:07.914 Malloc0 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:07.914 [2024-11-26 
07:21:51.817141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:07.914 { 00:12:07.914 "params": { 00:12:07.914 "name": "Nvme$subsystem", 00:12:07.914 "trtype": "$TEST_TRANSPORT", 00:12:07.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:07.914 "adrfam": "ipv4", 00:12:07.914 "trsvcid": "$NVMF_PORT", 00:12:07.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:07.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:07.914 "hdgst": ${hdgst:-false}, 00:12:07.914 "ddgst": ${ddgst:-false} 00:12:07.914 }, 00:12:07.914 "method": "bdev_nvme_attach_controller" 00:12:07.914 } 00:12:07.914 EOF 00:12:07.914 )") 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:07.914 07:21:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:07.914 "params": { 00:12:07.914 "name": "Nvme1", 00:12:07.914 "trtype": "tcp", 00:12:07.914 "traddr": "10.0.0.2", 00:12:07.914 "adrfam": "ipv4", 00:12:07.914 "trsvcid": "4420", 00:12:07.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:07.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:07.915 "hdgst": false, 00:12:07.915 "ddgst": false 00:12:07.915 }, 00:12:07.915 "method": "bdev_nvme_attach_controller" 00:12:07.915 }' 00:12:07.915 [2024-11-26 07:21:51.885230] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:12:07.915 [2024-11-26 07:21:51.885308] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972401 ] 00:12:07.915 [2024-11-26 07:21:51.972857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:07.915 [2024-11-26 07:21:52.017315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.915 [2024-11-26 07:21:52.017440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.915 [2024-11-26 07:21:52.017443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.175 I/O targets: 00:12:08.175 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:08.175 00:12:08.175 00:12:08.176 CUnit - A unit testing framework for C - Version 2.1-3 00:12:08.176 http://cunit.sourceforge.net/ 00:12:08.176 00:12:08.176 00:12:08.176 Suite: bdevio tests on: Nvme1n1 00:12:08.176 Test: blockdev write read block ...passed 00:12:08.176 Test: blockdev write zeroes read block ...passed 00:12:08.176 Test: blockdev write zeroes read no split ...passed 00:12:08.437 Test: blockdev write zeroes read split 
...passed 00:12:08.437 Test: blockdev write zeroes read split partial ...passed 00:12:08.437 Test: blockdev reset ...[2024-11-26 07:21:52.328395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:08.437 [2024-11-26 07:21:52.328460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc14b0 (9): Bad file descriptor 00:12:08.437 [2024-11-26 07:21:52.387152] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:12:08.437 passed 00:12:08.437 Test: blockdev write read 8 blocks ...passed 00:12:08.437 Test: blockdev write read size > 128k ...passed 00:12:08.437 Test: blockdev write read invalid size ...passed 00:12:08.437 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:08.437 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:08.437 Test: blockdev write read max offset ...passed 00:12:08.437 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:08.437 Test: blockdev writev readv 8 blocks ...passed 00:12:08.437 Test: blockdev writev readv 30 x 1block ...passed 00:12:08.698 Test: blockdev writev readv block ...passed 00:12:08.698 Test: blockdev writev readv size > 128k ...passed 00:12:08.698 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:08.698 Test: blockdev comparev and writev ...[2024-11-26 07:21:52.652883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:08.698 [2024-11-26 07:21:52.652909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:08.698 [2024-11-26 07:21:52.652920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:08.698 [2024-11-26 
07:21:52.652926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:08.699 [2024-11-26 07:21:52.653441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:08.699 [2024-11-26 07:21:52.653449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:08.699 [2024-11-26 07:21:52.653459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:08.699 [2024-11-26 07:21:52.653464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:08.699 [2024-11-26 07:21:52.653917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:08.699 [2024-11-26 07:21:52.653925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:08.699 [2024-11-26 07:21:52.653935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:08.699 [2024-11-26 07:21:52.653941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:08.699 [2024-11-26 07:21:52.654418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:08.699 [2024-11-26 07:21:52.654425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:08.699 [2024-11-26 07:21:52.654435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:12:08.699 [2024-11-26 07:21:52.654440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:08.699 passed 00:12:08.699 Test: blockdev nvme passthru rw ...passed 00:12:08.699 Test: blockdev nvme passthru vendor specific ...[2024-11-26 07:21:52.738704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:08.699 [2024-11-26 07:21:52.738714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:08.699 [2024-11-26 07:21:52.739041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:08.699 [2024-11-26 07:21:52.739050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:08.699 [2024-11-26 07:21:52.739384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:08.699 [2024-11-26 07:21:52.739392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:08.699 [2024-11-26 07:21:52.739705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:08.699 [2024-11-26 07:21:52.739712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:08.699 passed 00:12:08.699 Test: blockdev nvme admin passthru ...passed 00:12:08.699 Test: blockdev copy ...passed 00:12:08.699 00:12:08.699 Run Summary: Type Total Ran Passed Failed Inactive 00:12:08.699 suites 1 1 n/a 0 0 00:12:08.699 tests 23 23 23 0 0 00:12:08.699 asserts 152 152 152 0 n/a 00:12:08.699 00:12:08.699 Elapsed time = 1.195 seconds 
00:12:08.960 07:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:08.960 07:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.960 07:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:08.960 07:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.960 07:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:08.960 07:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:08.960 07:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:08.960 07:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:08.960 07:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:08.960 07:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:08.960 07:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:08.960 07:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:08.960 rmmod nvme_tcp 00:12:08.960 rmmod nvme_fabrics 00:12:08.960 rmmod nvme_keyring 00:12:08.960 07:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:08.960 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:08.960 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:08.960 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1972168 ']' 00:12:08.960 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1972168 00:12:08.960 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 1972168 ']' 00:12:08.960 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1972168 00:12:08.960 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:08.960 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.960 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1972168 00:12:08.960 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:08.960 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:08.960 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1972168' 00:12:08.960 killing process with pid 1972168 00:12:08.960 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1972168 00:12:08.960 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1972168 00:12:09.221 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:09.221 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:09.221 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:09.221 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:09.221 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:09.221 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:09.221 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:09.221 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:12:09.221 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:09.221 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.221 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.221 07:21:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:11.771 00:12:11.771 real 0m13.057s 00:12:11.771 user 0m13.241s 00:12:11.771 sys 0m6.857s 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:11.771 ************************************ 00:12:11.771 END TEST nvmf_bdevio 00:12:11.771 ************************************ 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:11.771 00:12:11.771 real 5m15.879s 00:12:11.771 user 11m56.887s 00:12:11.771 sys 1m59.277s 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:11.771 ************************************ 00:12:11.771 END TEST nvmf_target_core 00:12:11.771 ************************************ 00:12:11.771 07:21:55 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:11.771 07:21:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:11.771 07:21:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.771 07:21:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:12:11.771 ************************************ 00:12:11.771 START TEST nvmf_target_extra 00:12:11.771 ************************************ 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:11.771 * Looking for test storage... 00:12:11.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:11.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.771 --rc genhtml_branch_coverage=1 00:12:11.771 --rc genhtml_function_coverage=1 00:12:11.771 --rc genhtml_legend=1 00:12:11.771 --rc geninfo_all_blocks=1 
00:12:11.771 --rc geninfo_unexecuted_blocks=1 00:12:11.771 00:12:11.771 ' 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:11.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.771 --rc genhtml_branch_coverage=1 00:12:11.771 --rc genhtml_function_coverage=1 00:12:11.771 --rc genhtml_legend=1 00:12:11.771 --rc geninfo_all_blocks=1 00:12:11.771 --rc geninfo_unexecuted_blocks=1 00:12:11.771 00:12:11.771 ' 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:11.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.771 --rc genhtml_branch_coverage=1 00:12:11.771 --rc genhtml_function_coverage=1 00:12:11.771 --rc genhtml_legend=1 00:12:11.771 --rc geninfo_all_blocks=1 00:12:11.771 --rc geninfo_unexecuted_blocks=1 00:12:11.771 00:12:11.771 ' 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:11.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.771 --rc genhtml_branch_coverage=1 00:12:11.771 --rc genhtml_function_coverage=1 00:12:11.771 --rc genhtml_legend=1 00:12:11.771 --rc geninfo_all_blocks=1 00:12:11.771 --rc geninfo_unexecuted_blocks=1 00:12:11.771 00:12:11.771 ' 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.771 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:11.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:11.772 ************************************ 00:12:11.772 START TEST nvmf_example 00:12:11.772 ************************************ 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:11.772 * Looking for test storage... 00:12:11.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.772 
07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:11.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.772 --rc genhtml_branch_coverage=1 00:12:11.772 --rc genhtml_function_coverage=1 00:12:11.772 --rc genhtml_legend=1 00:12:11.772 --rc geninfo_all_blocks=1 00:12:11.772 --rc geninfo_unexecuted_blocks=1 00:12:11.772 00:12:11.772 ' 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:11.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.772 --rc genhtml_branch_coverage=1 00:12:11.772 --rc genhtml_function_coverage=1 00:12:11.772 --rc genhtml_legend=1 00:12:11.772 --rc geninfo_all_blocks=1 00:12:11.772 --rc geninfo_unexecuted_blocks=1 00:12:11.772 00:12:11.772 ' 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:11.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.772 --rc genhtml_branch_coverage=1 00:12:11.772 --rc genhtml_function_coverage=1 00:12:11.772 --rc genhtml_legend=1 00:12:11.772 --rc geninfo_all_blocks=1 00:12:11.772 --rc geninfo_unexecuted_blocks=1 00:12:11.772 00:12:11.772 ' 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:11.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.772 --rc 
genhtml_branch_coverage=1 00:12:11.772 --rc genhtml_function_coverage=1 00:12:11.772 --rc genhtml_legend=1 00:12:11.772 --rc geninfo_all_blocks=1 00:12:11.772 --rc geninfo_unexecuted_blocks=1 00:12:11.772 00:12:11.772 ' 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.772 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:11.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:11.773 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:11.773 07:21:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.035 
07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:12.035 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:20.185 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.185 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:20.185 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:20.185 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:20.185 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:20.185 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:20.185 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:20.185 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:20.185 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:20.185 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:20.185 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:20.185 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:20.185 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:20.185 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:12:20.185 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:20.185 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.185 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.185 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.185 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:20.186 07:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:20.186 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:20.186 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:20.186 Found net devices under 0000:31:00.0: cvl_0_0 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.186 07:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:20.186 Found net devices under 0000:31:00.1: cvl_0_1 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.186 
07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.186 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:20.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.723 ms 00:12:20.447 00:12:20.447 --- 10.0.0.2 ping statistics --- 00:12:20.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.447 rtt min/avg/max/mdev = 0.723/0.723/0.723/0.000 ms 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:12:20.447 00:12:20.447 --- 10.0.0.1 ping statistics --- 00:12:20.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.447 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:20.447 07:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1977491 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1977491 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1977491 ']' 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:12:20.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.447 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:21.389 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.389 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:12:21.389 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:21.389 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:21.389 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:21.649 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.649 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.649 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:21.649 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.649 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:21.649 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.649 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:21.649 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.649 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:21.649 
07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:21.649 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.649 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:21.649 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.649 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:21.649 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:21.649 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.649 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:21.649 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.649 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.650 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.650 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:21.650 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.650 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:21.650 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:33.910 Initializing NVMe Controllers 00:12:33.910 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:33.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:33.910 Initialization complete. Launching workers. 00:12:33.910 ======================================================== 00:12:33.910 Latency(us) 00:12:33.910 Device Information : IOPS MiB/s Average min max 00:12:33.910 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19118.03 74.68 3347.25 643.02 15355.57 00:12:33.910 ======================================================== 00:12:33.910 Total : 19118.03 74.68 3347.25 643.02 15355.57 00:12:33.910 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:33.910 rmmod nvme_tcp 00:12:33.910 rmmod nvme_fabrics 00:12:33.910 rmmod nvme_keyring 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1977491 ']' 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1977491 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1977491 ']' 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1977491 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1977491 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1977491' 00:12:33.910 killing process with pid 1977491 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1977491 00:12:33.910 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1977491 00:12:33.910 nvmf threads initialize successfully 00:12:33.910 bdev subsystem init successfully 00:12:33.910 created a nvmf target service 00:12:33.910 create targets's poll groups done 00:12:33.910 all subsystems of target started 00:12:33.910 nvmf target is running 00:12:33.910 all subsystems of target stopped 00:12:33.910 destroy targets's poll groups done 00:12:33.910 destroyed the nvmf target service 00:12:33.910 bdev subsystem 
finish successfully 00:12:33.910 nvmf threads destroy successfully 00:12:33.910 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:33.910 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:33.910 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:33.910 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:33.910 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:12:33.910 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:33.910 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:12:33.910 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:33.910 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:33.910 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.910 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.910 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.171 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:34.171 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:34.171 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:34.171 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:34.171 00:12:34.171 real 0m22.568s 00:12:34.171 user 0m47.355s 00:12:34.171 sys 0m7.572s 00:12:34.171 
07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.171 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:34.171 ************************************ 00:12:34.171 END TEST nvmf_example 00:12:34.171 ************************************ 00:12:34.171 07:22:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:34.171 07:22:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:34.171 07:22:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.171 07:22:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:34.434 ************************************ 00:12:34.434 START TEST nvmf_filesystem 00:12:34.434 ************************************ 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:34.434 * Looking for test storage... 
00:12:34.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:34.434 
07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:34.434 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:34.435 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:34.435 --rc genhtml_branch_coverage=1 00:12:34.435 --rc genhtml_function_coverage=1 00:12:34.435 --rc genhtml_legend=1 00:12:34.435 --rc geninfo_all_blocks=1 00:12:34.435 --rc geninfo_unexecuted_blocks=1 00:12:34.435 00:12:34.435 ' 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:34.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.435 --rc genhtml_branch_coverage=1 00:12:34.435 --rc genhtml_function_coverage=1 00:12:34.435 --rc genhtml_legend=1 00:12:34.435 --rc geninfo_all_blocks=1 00:12:34.435 --rc geninfo_unexecuted_blocks=1 00:12:34.435 00:12:34.435 ' 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:34.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.435 --rc genhtml_branch_coverage=1 00:12:34.435 --rc genhtml_function_coverage=1 00:12:34.435 --rc genhtml_legend=1 00:12:34.435 --rc geninfo_all_blocks=1 00:12:34.435 --rc geninfo_unexecuted_blocks=1 00:12:34.435 00:12:34.435 ' 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:34.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.435 --rc genhtml_branch_coverage=1 00:12:34.435 --rc genhtml_function_coverage=1 00:12:34.435 --rc genhtml_legend=1 00:12:34.435 --rc geninfo_all_blocks=1 00:12:34.435 --rc geninfo_unexecuted_blocks=1 00:12:34.435 00:12:34.435 ' 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:34.435 07:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:34.435 07:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:34.435 07:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:34.435 07:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:34.435 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:34.436 07:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:34.436 
07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:34.436 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:34.436 #define SPDK_CONFIG_H 00:12:34.436 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:34.436 #define SPDK_CONFIG_APPS 1 00:12:34.436 #define SPDK_CONFIG_ARCH native 00:12:34.436 #undef SPDK_CONFIG_ASAN 00:12:34.436 #undef SPDK_CONFIG_AVAHI 00:12:34.436 #undef SPDK_CONFIG_CET 00:12:34.436 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:34.436 #define SPDK_CONFIG_COVERAGE 1 00:12:34.436 #define SPDK_CONFIG_CROSS_PREFIX 00:12:34.436 #undef SPDK_CONFIG_CRYPTO 00:12:34.436 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:34.436 #undef SPDK_CONFIG_CUSTOMOCF 00:12:34.436 #undef SPDK_CONFIG_DAOS 00:12:34.436 #define SPDK_CONFIG_DAOS_DIR 00:12:34.436 #define SPDK_CONFIG_DEBUG 1 00:12:34.436 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:34.436 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:34.436 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:34.436 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:34.436 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:34.436 #undef SPDK_CONFIG_DPDK_UADK 00:12:34.436 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:34.436 #define SPDK_CONFIG_EXAMPLES 1 00:12:34.436 #undef SPDK_CONFIG_FC 00:12:34.436 #define SPDK_CONFIG_FC_PATH 00:12:34.436 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:34.436 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:34.436 #define SPDK_CONFIG_FSDEV 1 00:12:34.436 #undef SPDK_CONFIG_FUSE 00:12:34.436 #undef SPDK_CONFIG_FUZZER 00:12:34.436 #define SPDK_CONFIG_FUZZER_LIB 00:12:34.436 #undef SPDK_CONFIG_GOLANG 00:12:34.436 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:34.436 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:34.436 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:34.436 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:34.436 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:34.436 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:34.436 #undef SPDK_CONFIG_HAVE_LZ4 00:12:34.436 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:34.436 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:34.436 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:34.436 #define SPDK_CONFIG_IDXD 1 00:12:34.436 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:34.436 #undef SPDK_CONFIG_IPSEC_MB 00:12:34.436 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:34.436 #define SPDK_CONFIG_ISAL 1 00:12:34.436 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:34.436 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:34.436 #define SPDK_CONFIG_LIBDIR 00:12:34.436 #undef SPDK_CONFIG_LTO 00:12:34.436 #define SPDK_CONFIG_MAX_LCORES 128 00:12:34.436 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:34.436 #define SPDK_CONFIG_NVME_CUSE 1 00:12:34.436 #undef SPDK_CONFIG_OCF 00:12:34.436 #define SPDK_CONFIG_OCF_PATH 00:12:34.436 #define SPDK_CONFIG_OPENSSL_PATH 00:12:34.437 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:34.437 #define SPDK_CONFIG_PGO_DIR 00:12:34.437 #undef SPDK_CONFIG_PGO_USE 00:12:34.437 #define SPDK_CONFIG_PREFIX /usr/local 00:12:34.437 #undef SPDK_CONFIG_RAID5F 00:12:34.437 #undef SPDK_CONFIG_RBD 00:12:34.437 #define SPDK_CONFIG_RDMA 1 00:12:34.437 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:34.437 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:34.437 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:34.437 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:34.437 #define SPDK_CONFIG_SHARED 1 00:12:34.437 #undef SPDK_CONFIG_SMA 00:12:34.437 #define SPDK_CONFIG_TESTS 1 00:12:34.437 #undef SPDK_CONFIG_TSAN 00:12:34.437 #define SPDK_CONFIG_UBLK 1 00:12:34.437 #define SPDK_CONFIG_UBSAN 1 00:12:34.437 #undef SPDK_CONFIG_UNIT_TESTS 00:12:34.437 #undef SPDK_CONFIG_URING 00:12:34.437 #define SPDK_CONFIG_URING_PATH 00:12:34.437 #undef SPDK_CONFIG_URING_ZNS 00:12:34.437 #undef SPDK_CONFIG_USDT 00:12:34.437 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:34.437 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:34.437 #define SPDK_CONFIG_VFIO_USER 1 00:12:34.437 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:34.437 #define SPDK_CONFIG_VHOST 1 00:12:34.437 #define SPDK_CONFIG_VIRTIO 1 00:12:34.437 #undef SPDK_CONFIG_VTUNE 00:12:34.437 #define SPDK_CONFIG_VTUNE_DIR 00:12:34.437 #define SPDK_CONFIG_WERROR 1 00:12:34.437 #define SPDK_CONFIG_WPDK_DIR 00:12:34.437 #undef SPDK_CONFIG_XNVME 00:12:34.437 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:34.437 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:34.437 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.437 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:34.437 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.437 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.437 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.437 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:34.437 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.437 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.437 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:34.437 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.437 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:34.437 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:34.437 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:34.437 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:34.437 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:34.702 07:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:34.702 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:34.703 
07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:34.703 07:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:34.703 
07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:34.703 07:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:34.703 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:34.704 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1980287 ]] 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1980287 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.WjE9j5 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.WjE9j5/tests/target /tmp/spdk.WjE9j5 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=122248122368 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356550144 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7108427776 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666906624 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678273024 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:34.705 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847697408 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871310848 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23613440 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=175104 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:12:34.706 07:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=328704 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677597184 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678277120 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=679936 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:34.706 * Looking for test storage... 
00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=122248122368 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9323020288 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.706 07:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:34.706 07:22:18 
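The trace above (`set_test_storage`) walks a list of candidate directories and keeps the first one whose filesystem can hold the requested size, falling back to a fresh `mktemp` directory. A minimal standalone sketch of that selection — the variable names and candidate list are illustrative stand-ins, not the SPDK helper itself, and the requested size is shrunk to 1 MiB here (the trace asks for 2 GiB):

```shell
#!/usr/bin/env bash
# Sketch of the test-storage selection traced above: probe candidate
# directories in order and keep the first with enough free space.
requested_size=$((1 * 1024 * 1024))          # 1 MiB for illustration; the trace uses 2 GiB
storage_fallback=$(mktemp -dt spdk.XXXXXX)   # fallback dir, e.g. /tmp/spdk.XXXXXX
target_dir=
for candidate in "$PWD" "$storage_fallback"; do
    # df -P: POSIX-format output, available 1K-blocks in column 4 of the data row
    avail_kb=$(df -P "$candidate" | awk 'NR == 2 {print $4}')
    if [ $((avail_kb * 1024)) -ge "$requested_size" ]; then
        target_dir=$candidate
        printf '* Found test storage at %s\n' "$target_dir"
        break
    fi
done
```

The real script additionally special-cases tmpfs/ramfs mounts and grows overlay sizing checks, which this sketch omits.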
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:34.706 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
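The `lt 1.15 2` / `cmp_versions` calls traced above split both version strings on `.` and `-` and compare them field by field as integers. A self-contained bash rendition of that idea — `version_lt` is a hypothetical name, not the `scripts/common.sh` function:

```shell
#!/usr/bin/env bash
# Sketch of the field-by-field version comparison traced above.
version_lt() {
    local IFS=.-                       # split on dots and dashes, as the trace does
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing fields compare as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                           # equal is not less-than
}
# mirrors the traced check: lcov 1.15 is older than 2, so old-style options apply
version_lt 1.15 2 && lcov_is_old=yes || lcov_is_old=no
```

This is why the trace goes on to select the pre-2.0 `--rc lcov_branch_coverage=1` style of lcov options.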
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:34.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.707 --rc genhtml_branch_coverage=1 00:12:34.707 --rc genhtml_function_coverage=1 00:12:34.707 --rc genhtml_legend=1 00:12:34.707 --rc geninfo_all_blocks=1 00:12:34.707 --rc geninfo_unexecuted_blocks=1 00:12:34.707 00:12:34.707 ' 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:34.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.707 --rc genhtml_branch_coverage=1 00:12:34.707 --rc genhtml_function_coverage=1 00:12:34.707 --rc genhtml_legend=1 00:12:34.707 --rc geninfo_all_blocks=1 00:12:34.707 --rc geninfo_unexecuted_blocks=1 00:12:34.707 00:12:34.707 ' 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:34.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.707 --rc genhtml_branch_coverage=1 00:12:34.707 --rc genhtml_function_coverage=1 00:12:34.707 --rc genhtml_legend=1 00:12:34.707 --rc geninfo_all_blocks=1 00:12:34.707 --rc geninfo_unexecuted_blocks=1 00:12:34.707 00:12:34.707 ' 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:34.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.707 --rc genhtml_branch_coverage=1 00:12:34.707 --rc genhtml_function_coverage=1 00:12:34.707 --rc genhtml_legend=1 00:12:34.707 --rc geninfo_all_blocks=1 00:12:34.707 --rc geninfo_unexecuted_blocks=1 00:12:34.707 00:12:34.707 ' 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.707 07:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:34.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:34.707 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:34.708 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:34.708 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:12:34.708 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:34.708 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:34.708 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:34.708 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.708 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:34.708 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:34.708 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:34.708 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.708 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.708 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.708 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:34.708 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:34.708 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:34.708 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.854 07:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:42.854 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:42.854 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.854 07:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:42.854 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:43.115 Found net devices under 0000:31:00.0: cvl_0_0 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:43.115 Found net devices under 0000:31:00.1: cvl_0_1 00:12:43.115 07:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:43.115 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:43.116 07:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:43.116 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:43.116 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:43.116 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:43.116 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:43.116 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:43.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:43.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:12:43.377 00:12:43.377 --- 10.0.0.2 ping statistics --- 00:12:43.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.377 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:43.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:43.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:12:43.377 00:12:43.377 --- 10.0.0.1 ping statistics --- 00:12:43.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.377 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:43.377 07:22:27 
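[Editor's note] In the setup traced above, the target runs inside the `cvl_0_0_ns_spdk` network namespace, and `NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")` is later prepended to the `nvmf_tgt` invocation. A minimal sketch of that array-as-command-prefix pattern; it substitutes a harmless `env` prefix for `ip netns exec` so it runs without root:

```shell
#!/usr/bin/env bash
# Pattern from nvmf/common.sh: keep the namespace-entry command in an array
# and expand it in front of every target-side command. Expanding the array
# word-by-word preserves argument boundaries, unlike joining into a string.
# `env GREETING=ns` stands in for `ip netns exec <namespace>` here.
NS_CMD=(env GREETING=ns)   # real code: (ip netns exec "$NVMF_TARGET_NAMESPACE")

run_in_ns() {
    "${NS_CMD[@]}" "$@"
}

run_in_ns sh -c 'echo "$GREETING"'   # prints "ns"
```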
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:43.377 ************************************ 00:12:43.377 START TEST nvmf_filesystem_no_in_capsule 00:12:43.377 ************************************ 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1984621 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1984621 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 1984621 ']' 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.377 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.377 [2024-11-26 07:22:27.441400] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:12:43.377 [2024-11-26 07:22:27.441457] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.638 [2024-11-26 07:22:27.532818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:43.638 [2024-11-26 07:22:27.570936] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.638 [2024-11-26 07:22:27.570969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:43.638 [2024-11-26 07:22:27.570978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.638 [2024-11-26 07:22:27.570985] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.638 [2024-11-26 07:22:27.570991] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:43.638 [2024-11-26 07:22:27.572547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.638 [2024-11-26 07:22:27.572667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.638 [2024-11-26 07:22:27.572827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.638 [2024-11-26 07:22:27.572828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.209 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.209 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:44.209 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:44.209 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:44.209 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.209 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.209 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:44.209 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:44.209 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.209 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.209 [2024-11-26 07:22:28.282738] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.209 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.209 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:44.209 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.209 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.471 Malloc1 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.471 [2024-11-26 07:22:28.409476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:44.471 07:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:44.471 { 00:12:44.471 "name": "Malloc1", 00:12:44.471 "aliases": [ 00:12:44.471 "f093128a-1b9e-428e-a7dc-7a2a0997ba4e" 00:12:44.471 ], 00:12:44.471 "product_name": "Malloc disk", 00:12:44.471 "block_size": 512, 00:12:44.471 "num_blocks": 1048576, 00:12:44.471 "uuid": "f093128a-1b9e-428e-a7dc-7a2a0997ba4e", 00:12:44.471 "assigned_rate_limits": { 00:12:44.471 "rw_ios_per_sec": 0, 00:12:44.471 "rw_mbytes_per_sec": 0, 00:12:44.471 "r_mbytes_per_sec": 0, 00:12:44.471 "w_mbytes_per_sec": 0 00:12:44.471 }, 00:12:44.471 "claimed": true, 00:12:44.471 "claim_type": "exclusive_write", 00:12:44.471 "zoned": false, 00:12:44.471 "supported_io_types": { 00:12:44.471 "read": true, 00:12:44.471 "write": true, 00:12:44.471 "unmap": true, 00:12:44.471 "flush": true, 00:12:44.471 "reset": true, 00:12:44.471 "nvme_admin": false, 00:12:44.471 "nvme_io": false, 00:12:44.471 "nvme_io_md": false, 00:12:44.471 "write_zeroes": true, 00:12:44.471 "zcopy": true, 00:12:44.471 "get_zone_info": false, 00:12:44.471 "zone_management": false, 00:12:44.471 "zone_append": false, 00:12:44.471 "compare": false, 00:12:44.471 "compare_and_write": 
false, 00:12:44.471 "abort": true, 00:12:44.471 "seek_hole": false, 00:12:44.471 "seek_data": false, 00:12:44.471 "copy": true, 00:12:44.471 "nvme_iov_md": false 00:12:44.471 }, 00:12:44.471 "memory_domains": [ 00:12:44.471 { 00:12:44.471 "dma_device_id": "system", 00:12:44.471 "dma_device_type": 1 00:12:44.471 }, 00:12:44.471 { 00:12:44.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.471 "dma_device_type": 2 00:12:44.471 } 00:12:44.471 ], 00:12:44.471 "driver_specific": {} 00:12:44.471 } 00:12:44.471 ]' 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:44.471 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.386 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:12:46.386 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:46.386 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.386 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:46.386 07:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:48.303 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:48.303 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:48.303 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.303 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:48.303 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.303 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:48.303 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:48.303 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:48.303 07:22:32 
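[Editor's note] The `waitforserial` trace above polls `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` until the expected number of namespaces appears after `nvme connect`. A simplified sketch of that poll loop (the retry bound mirrors the `i++ <= 15` seen in the trace; the probe command and sleep interval are stand-ins, since the real script sleeps 2 s between tries):

```shell
#!/usr/bin/env bash
# Poll a probe command until it reports the expected count, giving up
# after 16 tries. The real probe is the lsblk|grep -c pipeline; any
# command printing a number works here.
wait_for_count() {
    local probe_cmd=$1 expected=$2 i=0 count
    while (( i++ <= 15 )); do
        count=$(eval "$probe_cmd")
        (( count == expected )) && return 0
        sleep 0.1
    done
    return 1
}

# Stand-in probe; the real call is roughly:
#   wait_for_count "lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME" 1
wait_for_count "echo 1" 1 && echo connected
```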
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:48.303 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:48.303 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:48.303 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:48.303 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:48.303 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:48.303 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:48.303 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:48.303 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:48.303 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:48.564 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:49.953 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:49.954 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:49.954 07:22:33 
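[Editor's note] The `sec_size_to_bytes` / `(( nvme_size == malloc_size ))` check above verifies that the connected namespace matches the 512 MiB malloc bdev created earlier (1048576 blocks of 512 bytes). The kernel reports block device sizes in 512-byte sectors under `/sys/block/<dev>/size`, so the conversion is a single multiplication. The numbers below are hard-coded to mirror this log rather than read from sysfs:

```shell
#!/usr/bin/env bash
# sec_size_to_bytes-style conversion: sectors (as the real code would read
# from /sys/block/nvme0n1/size) times 512 gives bytes.
sectors=1048576
bytes=$(( sectors * 512 ))
echo "$bytes"   # 536870912, i.e. 512 MiB

# The test then asserts the namespace size equals the malloc bdev size:
(( bytes == 536870912 )) && echo "size matches malloc bdev"
```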
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:49.954 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.954 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:49.954 ************************************ 00:12:49.954 START TEST filesystem_ext4 00:12:49.954 ************************************ 00:12:49.954 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:49.954 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:49.954 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:49.954 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:49.954 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:49.954 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:49.954 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:49.954 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:49.954 07:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:49.954 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:49.954 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:49.954 mke2fs 1.47.0 (5-Feb-2023) 00:12:49.954 Discarding device blocks: 0/522240 done 00:12:49.954 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:49.954 Filesystem UUID: 924ee154-4337-46de-91e6-d33897b67f9d 00:12:49.954 Superblock backups stored on blocks: 00:12:49.954 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:49.954 00:12:49.954 Allocating group tables: 0/64 done 00:12:49.954 Writing inode tables: 0/64 done 00:12:49.954 Creating journal (8192 blocks): done 00:12:52.276 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:12:52.276 00:12:52.276 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:52.276 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:57.563 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:57.563 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:57.563 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:57.563 07:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:57.563 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:57.563 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:57.563 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1984621 00:12:57.563 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:57.563 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:57.563 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:57.563 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:57.563 00:12:57.563 real 0m7.991s 00:12:57.563 user 0m0.024s 00:12:57.563 sys 0m0.091s 00:12:57.563 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.563 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:57.563 ************************************ 00:12:57.563 END TEST filesystem_ext4 00:12:57.563 ************************************ 00:12:57.827 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:57.827 
07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:57.827 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.827 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:57.827 ************************************ 00:12:57.827 START TEST filesystem_btrfs 00:12:57.827 ************************************ 00:12:57.827 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:57.827 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:57.827 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:57.827 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:57.827 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:57.827 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:57.827 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:57.827 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:57.827 07:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:57.827 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:57.827 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:57.827 btrfs-progs v6.8.1 00:12:57.827 See https://btrfs.readthedocs.io for more information. 00:12:57.827 00:12:57.827 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:57.827 NOTE: several default settings have changed in version 5.15, please make sure 00:12:57.827 this does not affect your deployments: 00:12:57.827 - DUP for metadata (-m dup) 00:12:57.827 - enabled no-holes (-O no-holes) 00:12:57.827 - enabled free-space-tree (-R free-space-tree) 00:12:57.827 00:12:57.827 Label: (null) 00:12:57.827 UUID: 64a8a4e0-6d02-409e-99f3-6509f912c1c4 00:12:57.827 Node size: 16384 00:12:57.827 Sector size: 4096 (CPU page size: 4096) 00:12:57.827 Filesystem size: 510.00MiB 00:12:57.827 Block group profiles: 00:12:57.827 Data: single 8.00MiB 00:12:57.827 Metadata: DUP 32.00MiB 00:12:57.827 System: DUP 8.00MiB 00:12:57.827 SSD detected: yes 00:12:57.827 Zoned device: no 00:12:57.827 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:57.827 Checksum: crc32c 00:12:57.827 Number of devices: 1 00:12:57.827 Devices: 00:12:57.827 ID SIZE PATH 00:12:57.827 1 510.00MiB /dev/nvme0n1p1 00:12:57.827 00:12:57.827 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:57.827 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:58.399 07:22:42 
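[Editor's note] The `make_filesystem` traces above (`'[' ext4 = ext4 ']'` → `force=-F`, btrfs → `force=-f`) show how common/autotest_common.sh picks the right "force" flag per filesystem before invoking mkfs. A simplified sketch of that dispatch, omitting the retry counter the real helper keeps, and echoing the command instead of executing it (device path is a placeholder):

```shell
#!/usr/bin/env bash
# ext4's mkfs takes -F to overwrite an existing filesystem; btrfs and xfs
# use -f for the same purpose. Echo the command rather than run it.
make_filesystem() {
    local fstype=$1 dev_name=$2 force=
    case $fstype in
        ext4)      force=-F ;;
        btrfs|xfs) force=-f ;;
    esac
    echo "mkfs.$fstype $force $dev_name"   # real helper executes this
}

make_filesystem ext4 /dev/nvme0n1p1    # prints: mkfs.ext4 -F /dev/nvme0n1p1
make_filesystem btrfs /dev/nvme0n1p1   # prints: mkfs.btrfs -f /dev/nvme0n1p1
```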
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1984621 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:58.399 00:12:58.399 real 0m0.673s 00:12:58.399 user 0m0.025s 00:12:58.399 sys 0m0.126s 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.399 
07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:58.399 ************************************ 00:12:58.399 END TEST filesystem_btrfs 00:12:58.399 ************************************ 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:58.399 ************************************ 00:12:58.399 START TEST filesystem_xfs 00:12:58.399 ************************************ 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:58.399 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:58.660 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:58.660 = sectsz=512 attr=2, projid32bit=1 00:12:58.660 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:58.660 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:58.660 data = bsize=4096 blocks=130560, imaxpct=25 00:12:58.660 = sunit=0 swidth=0 blks 00:12:58.660 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:58.660 log =internal log bsize=4096 blocks=16384, version=2 00:12:58.660 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:58.660 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:59.231 Discarding blocks...Done. 
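After each mkfs, the trace repeats the same smoke test from filesystem.sh@23-30: mount the partition, create a file, sync, remove it, sync, unmount. A runnable sketch of that sequence, using a temporary directory in place of the real `/mnt/device` mount so it works without root or an NVMe namespace:

```shell
# The per-filesystem smoke test visible in the trace (filesystem.sh@23-30).
# A mktemp directory stands in for the mounted /dev/nvme0n1p1 partition.
mnt=$(mktemp -d)

touch "$mnt/aaa"   # filesystem.sh@24: create a file on the new filesystem
sync               # filesystem.sh@25: flush it to the device
rm "$mnt/aaa"      # filesystem.sh@26: remove it again
sync               # filesystem.sh@27: flush the removal

[ ! -e "$mnt/aaa" ] && echo "smoke test passed"
rmdir "$mnt"
```

The real test then unmounts (`filesystem.sh@30`) and checks with `lsblk`/`grep` that the namespace and partition are still visible, as the subsequent trace lines show.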
00:12:59.231 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:59.231 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:01.778 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:01.778 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:01.778 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:01.778 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:01.778 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:01.778 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:01.778 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1984621 00:13:01.778 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:01.778 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:01.778 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:01.778 07:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:01.778 00:13:01.778 real 0m3.208s 00:13:01.778 user 0m0.031s 00:13:01.778 sys 0m0.079s 00:13:01.778 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.778 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:01.778 ************************************ 00:13:01.778 END TEST filesystem_xfs 00:13:01.778 ************************************ 00:13:01.778 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:01.778 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:01.778 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.040 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:02.040 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:02.040 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:02.040 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.040 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:02.040 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.040 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:02.040 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.040 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.040 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.040 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.040 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:02.040 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1984621 00:13:02.040 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1984621 ']' 00:13:02.040 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1984621 00:13:02.040 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:02.040 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.040 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1984621 00:13:02.040 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:02.040 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:02.040 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1984621' 00:13:02.040 killing process with pid 1984621 00:13:02.040 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1984621 00:13:02.040 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1984621 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:02.302 00:13:02.302 real 0m18.923s 00:13:02.302 user 1m14.772s 00:13:02.302 sys 0m1.471s 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.302 ************************************ 00:13:02.302 END TEST nvmf_filesystem_no_in_capsule 00:13:02.302 ************************************ 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.302 07:22:46 
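The teardown above runs `killprocess` (autotest_common.sh@954-978): verify the pid argument, confirm the process is alive with `kill -0`, check its command name with `ps`, then kill and wait. A sketch of that flow, with a background `sleep` standing in for the nvmf_tgt reactor process:

```shell
# Sketch of the killprocess flow from the trace (autotest_common.sh@954-978).
# A background sleep plays the role of the nvmf_tgt process (pid 1984621 in
# the log).
sleep 30 &
pid=$!

[ -n "$pid" ]                              # @954: reject an empty pid
kill -0 "$pid"                             # @958: process must be alive
comm=$(ps --no-headers -o comm= "$pid")    # @960: fetch the command name
[ "$comm" != sudo ] && echo "killing process with pid $pid"  # @964/@972
kill "$pid"                                # @973
wait "$pid" 2>/dev/null || true            # @978: reap; kill makes wait non-zero
echo "teardown done"
```

The `comm != sudo` check mirrors the log's guard against killing a sudo wrapper instead of the target process itself.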
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:02.302 ************************************ 00:13:02.302 START TEST nvmf_filesystem_in_capsule 00:13:02.302 ************************************ 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1988537 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1988537 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1988537 ']' 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.302 07:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.302 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.563 [2024-11-26 07:22:46.455271] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:13:02.564 [2024-11-26 07:22:46.455336] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.564 [2024-11-26 07:22:46.550847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.564 [2024-11-26 07:22:46.592492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.564 [2024-11-26 07:22:46.592529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.564 [2024-11-26 07:22:46.592537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.564 [2024-11-26 07:22:46.592544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.564 [2024-11-26 07:22:46.592550] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:02.564 [2024-11-26 07:22:46.594444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.564 [2024-11-26 07:22:46.594560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.564 [2024-11-26 07:22:46.594722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.564 [2024-11-26 07:22:46.594722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.134 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.134 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:13:03.134 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:03.134 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:03.134 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.396 [2024-11-26 07:22:47.297314] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.396 Malloc1 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.396 07:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.396 [2024-11-26 07:22:47.423313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.396 07:22:47 
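The RPC calls traced above (filesystem.sh@52-56) set up the in-capsule target: create the TCP transport with a 4096-byte in-capsule data size, create a malloc bdev, create the subsystem, attach the namespace, and add the listener. Reconstructed as a sequence, with `rpc_cmd` replaced by a hypothetical echo stub (the real helper wraps spdk's `scripts/rpc.py`) so it can run without a live target:

```shell
# RPC sequence from the trace (filesystem.sh@52-56). rpc_cmd here is a stub;
# in the test suite it forwards to spdk/scripts/rpc.py against nvmf_tgt.
rpc_cmd() { echo "rpc.py $*"; }

rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 4096 = in_capsule size
rpc_cmd bdev_malloc_create 512 512 -b Malloc1             # 512 MiB, 512 B blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The `-c 4096` argument is what distinguishes this `nvmf_filesystem_in_capsule` run from the `no_in_capsule` run earlier in the log, where the transport is created without in-capsule data.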
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:13:03.396 { 00:13:03.396 "name": "Malloc1", 00:13:03.396 "aliases": [ 00:13:03.396 "62707c52-40f0-482a-a448-dac24e543d62" 00:13:03.396 ], 00:13:03.396 "product_name": "Malloc disk", 00:13:03.396 "block_size": 512, 00:13:03.396 "num_blocks": 1048576, 00:13:03.396 "uuid": "62707c52-40f0-482a-a448-dac24e543d62", 00:13:03.396 "assigned_rate_limits": { 00:13:03.396 "rw_ios_per_sec": 0, 00:13:03.396 "rw_mbytes_per_sec": 0, 00:13:03.396 "r_mbytes_per_sec": 0, 00:13:03.396 "w_mbytes_per_sec": 0 00:13:03.396 }, 00:13:03.396 "claimed": true, 00:13:03.396 "claim_type": "exclusive_write", 00:13:03.396 "zoned": false, 00:13:03.396 "supported_io_types": { 00:13:03.396 "read": true, 00:13:03.396 "write": true, 00:13:03.396 "unmap": true, 00:13:03.396 "flush": true, 00:13:03.396 "reset": true, 00:13:03.396 "nvme_admin": false, 00:13:03.396 "nvme_io": false, 00:13:03.396 "nvme_io_md": false, 00:13:03.396 "write_zeroes": true, 00:13:03.396 "zcopy": true, 00:13:03.396 "get_zone_info": false, 00:13:03.396 "zone_management": false, 00:13:03.396 "zone_append": false, 00:13:03.396 "compare": false, 00:13:03.396 "compare_and_write": false, 00:13:03.396 "abort": true, 00:13:03.396 "seek_hole": false, 00:13:03.396 "seek_data": false, 00:13:03.396 "copy": true, 00:13:03.396 "nvme_iov_md": false 00:13:03.396 }, 00:13:03.396 "memory_domains": [ 00:13:03.396 { 00:13:03.396 "dma_device_id": "system", 00:13:03.396 "dma_device_type": 1 00:13:03.396 }, 00:13:03.396 { 00:13:03.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.396 "dma_device_type": 2 00:13:03.396 } 00:13:03.396 ], 00:13:03.396 
"driver_specific": {} 00:13:03.396 } 00:13:03.396 ]' 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:13:03.396 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:13:03.656 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:13:03.656 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:13:03.656 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:13:03.656 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:03.656 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.036 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:05.036 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:05.036 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.036 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:13:05.036 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:07.576 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:07.576 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:07.576 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.576 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:07.576 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.576 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:07.576 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:07.576 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:07.576 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:07.576 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:07.576 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:07.576 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:07.576 07:22:51 
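After `nvme connect`, the trace runs `waitforserial` (autotest_common.sh@1202-1212): poll `lsblk -l -o NAME,SERIAL` until the expected number of devices with the subsystem serial appears. A sketch of that polling loop, with a hypothetical `count_devices` stub in place of the `lsblk | grep -c` pipeline (and a 1s sleep instead of the helper's 2s):

```shell
# Sketch of the waitforserial loop from the trace (autotest_common.sh@1202-1212).
# count_devices is a stub for: lsblk -l -o NAME,SERIAL | grep -c "$serial"
waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0

    while (( i++ <= 15 )); do                            # @1210: bounded retries
        nvme_devices=$(count_devices "$serial")          # @1211
        (( nvme_devices == nvme_device_counter )) && return 0  # @1212
        sleep 1
    done
    return 1
}

count_devices() { echo 1; }   # stub: pretend the device is already visible

waitforserial SPDKISFASTANDAWESOME && echo "serial found"
```

The matching `waitforserial_disconnect` seen earlier in the log inverts the check, looping until the serial is no longer listed.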
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:07.576 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:07.576 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:07.576 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:07.576 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:07.576 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:08.149 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:09.093 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:09.093 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:09.093 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:09.093 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.093 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:09.354 ************************************ 00:13:09.354 START TEST filesystem_in_capsule_ext4 00:13:09.354 ************************************ 00:13:09.354 07:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:09.354 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:09.354 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:09.354 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:09.354 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:09.354 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:09.354 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:09.354 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:09.354 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:09.354 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:09.354 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:09.354 mke2fs 1.47.0 (5-Feb-2023) 00:13:09.354 Discarding device blocks: 
0/522240 done 00:13:09.354 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:09.354 Filesystem UUID: e2874433-13d2-4917-9b18-b1deb7d40bc6 00:13:09.354 Superblock backups stored on blocks: 00:13:09.354 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:09.354 00:13:09.354 Allocating group tables: 0/64 done 00:13:09.355 Writing inode tables: 0/64 done 00:13:09.616 Creating journal (8192 blocks): done 00:13:11.944 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:13:11.944 00:13:11.945 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:11.945 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:18.531 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:18.531 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:18.531 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:18.531 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:18.531 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:18.531 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:18.531 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1988537 00:13:18.531 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:18.531 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:18.531 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:18.531 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:18.531 00:13:18.531 real 0m8.688s 00:13:18.531 user 0m0.032s 00:13:18.531 sys 0m0.080s 00:13:18.531 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.531 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:18.531 ************************************ 00:13:18.531 END TEST filesystem_in_capsule_ext4 00:13:18.531 ************************************ 00:13:18.531 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:18.531 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:18.531 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.531 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:18.531 ************************************ 00:13:18.531 START 
TEST filesystem_in_capsule_btrfs 00:13:18.531 ************************************ 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:18.532 btrfs-progs v6.8.1 00:13:18.532 See https://btrfs.readthedocs.io for more information. 00:13:18.532 00:13:18.532 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:18.532 NOTE: several default settings have changed in version 5.15, please make sure 00:13:18.532 this does not affect your deployments: 00:13:18.532 - DUP for metadata (-m dup) 00:13:18.532 - enabled no-holes (-O no-holes) 00:13:18.532 - enabled free-space-tree (-R free-space-tree) 00:13:18.532 00:13:18.532 Label: (null) 00:13:18.532 UUID: e68a3f48-4ad2-4801-b606-f1780451fe88 00:13:18.532 Node size: 16384 00:13:18.532 Sector size: 4096 (CPU page size: 4096) 00:13:18.532 Filesystem size: 510.00MiB 00:13:18.532 Block group profiles: 00:13:18.532 Data: single 8.00MiB 00:13:18.532 Metadata: DUP 32.00MiB 00:13:18.532 System: DUP 8.00MiB 00:13:18.532 SSD detected: yes 00:13:18.532 Zoned device: no 00:13:18.532 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:18.532 Checksum: crc32c 00:13:18.532 Number of devices: 1 00:13:18.532 Devices: 00:13:18.532 ID SIZE PATH 00:13:18.532 1 510.00MiB /dev/nvme0n1p1 00:13:18.532 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1988537 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:18.532 00:13:18.532 real 0m0.635s 00:13:18.532 user 0m0.035s 00:13:18.532 sys 0m0.113s 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.532 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:18.532 ************************************ 00:13:18.532 END TEST filesystem_in_capsule_btrfs 00:13:18.532 ************************************ 00:13:18.795 07:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:18.795 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:18.795 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.795 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:18.795 ************************************ 00:13:18.795 START TEST filesystem_in_capsule_xfs 00:13:18.795 ************************************ 00:13:18.795 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:18.795 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:18.795 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:18.795 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:18.795 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:18.795 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:18.795 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:18.795 
07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:13:18.795 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:18.795 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:18.795 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:18.795 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:18.795 = sectsz=512 attr=2, projid32bit=1 00:13:18.795 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:18.795 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:18.795 data = bsize=4096 blocks=130560, imaxpct=25 00:13:18.795 = sunit=0 swidth=0 blks 00:13:18.795 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:18.795 log =internal log bsize=4096 blocks=16384, version=2 00:13:18.795 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:18.795 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:19.738 Discarding blocks...Done. 
00:13:19.738 07:23:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:19.738 07:23:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:21.653 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:21.915 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:21.915 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:21.915 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:21.915 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:21.915 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:21.915 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1988537 00:13:21.915 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:21.915 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:21.915 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:13:21.915 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:21.915 00:13:21.915 real 0m3.132s 00:13:21.915 user 0m0.020s 00:13:21.915 sys 0m0.087s 00:13:21.915 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.915 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:21.915 ************************************ 00:13:21.915 END TEST filesystem_in_capsule_xfs 00:13:21.915 ************************************ 00:13:21.915 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:22.177 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.748 07:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1988537 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1988537 ']' 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1988537 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.748 07:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1988537 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1988537' 00:13:22.748 killing process with pid 1988537 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1988537 00:13:22.748 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1988537 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:23.009 00:13:23.009 real 0m20.624s 00:13:23.009 user 1m21.458s 00:13:23.009 sys 0m1.518s 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.009 ************************************ 00:13:23.009 END TEST nvmf_filesystem_in_capsule 00:13:23.009 ************************************ 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:23.009 rmmod nvme_tcp 00:13:23.009 rmmod nvme_fabrics 00:13:23.009 rmmod nvme_keyring 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.009 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:25.557 00:13:25.557 real 0m50.890s 00:13:25.557 user 2m38.873s 00:13:25.557 sys 0m9.657s 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:25.557 ************************************ 00:13:25.557 END TEST nvmf_filesystem 00:13:25.557 ************************************ 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:25.557 ************************************ 00:13:25.557 START TEST nvmf_target_discovery 00:13:25.557 ************************************ 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:25.557 * Looking for test storage... 
00:13:25.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:25.557 
07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:25.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.557 --rc genhtml_branch_coverage=1 00:13:25.557 --rc genhtml_function_coverage=1 00:13:25.557 --rc genhtml_legend=1 00:13:25.557 --rc geninfo_all_blocks=1 00:13:25.557 --rc geninfo_unexecuted_blocks=1 00:13:25.557 00:13:25.557 ' 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:25.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.557 --rc genhtml_branch_coverage=1 00:13:25.557 --rc genhtml_function_coverage=1 00:13:25.557 --rc genhtml_legend=1 00:13:25.557 --rc geninfo_all_blocks=1 00:13:25.557 --rc geninfo_unexecuted_blocks=1 00:13:25.557 00:13:25.557 ' 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:25.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.557 --rc genhtml_branch_coverage=1 00:13:25.557 --rc genhtml_function_coverage=1 00:13:25.557 --rc genhtml_legend=1 00:13:25.557 --rc geninfo_all_blocks=1 00:13:25.557 --rc geninfo_unexecuted_blocks=1 00:13:25.557 00:13:25.557 ' 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:25.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.557 --rc genhtml_branch_coverage=1 00:13:25.557 --rc genhtml_function_coverage=1 00:13:25.557 --rc genhtml_legend=1 00:13:25.557 --rc geninfo_all_blocks=1 00:13:25.557 --rc geninfo_unexecuted_blocks=1 00:13:25.557 00:13:25.557 ' 00:13:25.557 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.557 07:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:25.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:25.558 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.703 07:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.703 07:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:33.703 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:33.703 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:33.703 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:33.704 07:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:33.704 Found net devices under 0000:31:00.0: cvl_0_0 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:33.704 07:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:33.704 Found net devices under 0000:31:00.1: cvl_0_1 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:33.704 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:33.966 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:33.966 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:33.966 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:33.966 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:33.966 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:13:33.966 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:33.966 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:33.966 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:33.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:33.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.768 ms 00:13:33.966 00:13:33.966 --- 10.0.0.2 ping statistics --- 00:13:33.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.966 rtt min/avg/max/mdev = 0.768/0.768/0.768/0.000 ms 00:13:33.966 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:33.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:33.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:13:33.966 00:13:33.966 --- 10.0.0.1 ping statistics --- 00:13:33.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.966 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:13:33.966 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.966 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:33.966 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:33.966 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.966 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:33.966 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:33.966 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.966 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:33.966 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:34.228 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:34.228 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:34.228 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:34.228 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.228 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1997548 00:13:34.228 07:23:18 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1997548 00:13:34.228 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:34.228 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1997548 ']' 00:13:34.228 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.228 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.228 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.228 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.228 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.228 [2024-11-26 07:23:18.185298] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:13:34.228 [2024-11-26 07:23:18.185366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.228 [2024-11-26 07:23:18.276725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:34.228 [2024-11-26 07:23:18.318601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:34.228 [2024-11-26 07:23:18.318640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.228 [2024-11-26 07:23:18.318648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.228 [2024-11-26 07:23:18.318654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.228 [2024-11-26 07:23:18.318661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:34.228 [2024-11-26 07:23:18.320539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.228 [2024-11-26 07:23:18.320656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.228 [2024-11-26 07:23:18.320812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:34.228 [2024-11-26 07:23:18.320813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.174 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.174 [2024-11-26 07:23:19.047843] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.174 Null1 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.174 
07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.174 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.175 [2024-11-26 07:23:19.108183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.175 Null2 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.175 
07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.175 Null3 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.175 Null4 00:13:35.175 
07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.175 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:13:35.437 00:13:35.437 Discovery Log Number of Records 6, Generation counter 6 00:13:35.437 =====Discovery Log Entry 0====== 00:13:35.437 trtype: tcp 00:13:35.437 adrfam: ipv4 00:13:35.437 subtype: current discovery subsystem 00:13:35.437 treq: not required 00:13:35.437 portid: 0 00:13:35.437 trsvcid: 4420 00:13:35.437 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:35.437 traddr: 10.0.0.2 00:13:35.437 eflags: explicit discovery connections, duplicate discovery information 00:13:35.437 sectype: none 00:13:35.437 =====Discovery Log Entry 1====== 00:13:35.437 trtype: tcp 00:13:35.437 adrfam: ipv4 00:13:35.437 subtype: nvme subsystem 00:13:35.437 treq: not required 00:13:35.437 portid: 0 00:13:35.437 trsvcid: 4420 00:13:35.437 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:35.437 traddr: 10.0.0.2 00:13:35.437 eflags: none 00:13:35.437 sectype: none 00:13:35.437 =====Discovery Log Entry 2====== 00:13:35.437 
trtype: tcp 00:13:35.437 adrfam: ipv4 00:13:35.437 subtype: nvme subsystem 00:13:35.437 treq: not required 00:13:35.437 portid: 0 00:13:35.437 trsvcid: 4420 00:13:35.437 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:35.437 traddr: 10.0.0.2 00:13:35.437 eflags: none 00:13:35.437 sectype: none 00:13:35.437 =====Discovery Log Entry 3====== 00:13:35.437 trtype: tcp 00:13:35.437 adrfam: ipv4 00:13:35.437 subtype: nvme subsystem 00:13:35.437 treq: not required 00:13:35.437 portid: 0 00:13:35.437 trsvcid: 4420 00:13:35.437 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:35.437 traddr: 10.0.0.2 00:13:35.437 eflags: none 00:13:35.437 sectype: none 00:13:35.437 =====Discovery Log Entry 4====== 00:13:35.437 trtype: tcp 00:13:35.437 adrfam: ipv4 00:13:35.437 subtype: nvme subsystem 00:13:35.437 treq: not required 00:13:35.437 portid: 0 00:13:35.437 trsvcid: 4420 00:13:35.437 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:35.437 traddr: 10.0.0.2 00:13:35.437 eflags: none 00:13:35.437 sectype: none 00:13:35.437 =====Discovery Log Entry 5====== 00:13:35.437 trtype: tcp 00:13:35.437 adrfam: ipv4 00:13:35.437 subtype: discovery subsystem referral 00:13:35.437 treq: not required 00:13:35.437 portid: 0 00:13:35.437 trsvcid: 4430 00:13:35.437 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:35.437 traddr: 10.0.0.2 00:13:35.437 eflags: none 00:13:35.437 sectype: none 00:13:35.437 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:35.437 Perform nvmf subsystem discovery via RPC 00:13:35.437 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:35.437 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.437 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.437 [ 00:13:35.437 { 00:13:35.437 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:13:35.437 "subtype": "Discovery", 00:13:35.437 "listen_addresses": [ 00:13:35.437 { 00:13:35.437 "trtype": "TCP", 00:13:35.437 "adrfam": "IPv4", 00:13:35.437 "traddr": "10.0.0.2", 00:13:35.437 "trsvcid": "4420" 00:13:35.437 } 00:13:35.437 ], 00:13:35.437 "allow_any_host": true, 00:13:35.437 "hosts": [] 00:13:35.437 }, 00:13:35.437 { 00:13:35.437 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:35.437 "subtype": "NVMe", 00:13:35.437 "listen_addresses": [ 00:13:35.437 { 00:13:35.437 "trtype": "TCP", 00:13:35.437 "adrfam": "IPv4", 00:13:35.437 "traddr": "10.0.0.2", 00:13:35.437 "trsvcid": "4420" 00:13:35.437 } 00:13:35.437 ], 00:13:35.437 "allow_any_host": true, 00:13:35.437 "hosts": [], 00:13:35.437 "serial_number": "SPDK00000000000001", 00:13:35.437 "model_number": "SPDK bdev Controller", 00:13:35.437 "max_namespaces": 32, 00:13:35.437 "min_cntlid": 1, 00:13:35.437 "max_cntlid": 65519, 00:13:35.437 "namespaces": [ 00:13:35.437 { 00:13:35.437 "nsid": 1, 00:13:35.437 "bdev_name": "Null1", 00:13:35.437 "name": "Null1", 00:13:35.437 "nguid": "BA72D9E64BF74D9F84E58E9CC188ABA3", 00:13:35.437 "uuid": "ba72d9e6-4bf7-4d9f-84e5-8e9cc188aba3" 00:13:35.437 } 00:13:35.437 ] 00:13:35.437 }, 00:13:35.437 { 00:13:35.437 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:35.437 "subtype": "NVMe", 00:13:35.437 "listen_addresses": [ 00:13:35.437 { 00:13:35.437 "trtype": "TCP", 00:13:35.437 "adrfam": "IPv4", 00:13:35.437 "traddr": "10.0.0.2", 00:13:35.437 "trsvcid": "4420" 00:13:35.437 } 00:13:35.437 ], 00:13:35.437 "allow_any_host": true, 00:13:35.437 "hosts": [], 00:13:35.437 "serial_number": "SPDK00000000000002", 00:13:35.437 "model_number": "SPDK bdev Controller", 00:13:35.437 "max_namespaces": 32, 00:13:35.437 "min_cntlid": 1, 00:13:35.437 "max_cntlid": 65519, 00:13:35.437 "namespaces": [ 00:13:35.437 { 00:13:35.437 "nsid": 1, 00:13:35.437 "bdev_name": "Null2", 00:13:35.437 "name": "Null2", 00:13:35.437 "nguid": "7862FAE1C6F3489F8EF0C1CE3EAC25B4", 
00:13:35.437 "uuid": "7862fae1-c6f3-489f-8ef0-c1ce3eac25b4" 00:13:35.437 } 00:13:35.437 ] 00:13:35.437 }, 00:13:35.437 { 00:13:35.437 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:35.437 "subtype": "NVMe", 00:13:35.437 "listen_addresses": [ 00:13:35.437 { 00:13:35.437 "trtype": "TCP", 00:13:35.437 "adrfam": "IPv4", 00:13:35.437 "traddr": "10.0.0.2", 00:13:35.437 "trsvcid": "4420" 00:13:35.437 } 00:13:35.437 ], 00:13:35.437 "allow_any_host": true, 00:13:35.437 "hosts": [], 00:13:35.437 "serial_number": "SPDK00000000000003", 00:13:35.437 "model_number": "SPDK bdev Controller", 00:13:35.437 "max_namespaces": 32, 00:13:35.437 "min_cntlid": 1, 00:13:35.437 "max_cntlid": 65519, 00:13:35.437 "namespaces": [ 00:13:35.437 { 00:13:35.437 "nsid": 1, 00:13:35.437 "bdev_name": "Null3", 00:13:35.437 "name": "Null3", 00:13:35.437 "nguid": "2D2CC2DFF58943678B20CBD972EB7A38", 00:13:35.437 "uuid": "2d2cc2df-f589-4367-8b20-cbd972eb7a38" 00:13:35.437 } 00:13:35.437 ] 00:13:35.437 }, 00:13:35.437 { 00:13:35.437 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:35.437 "subtype": "NVMe", 00:13:35.437 "listen_addresses": [ 00:13:35.437 { 00:13:35.437 "trtype": "TCP", 00:13:35.438 "adrfam": "IPv4", 00:13:35.438 "traddr": "10.0.0.2", 00:13:35.438 "trsvcid": "4420" 00:13:35.438 } 00:13:35.438 ], 00:13:35.438 "allow_any_host": true, 00:13:35.438 "hosts": [], 00:13:35.438 "serial_number": "SPDK00000000000004", 00:13:35.438 "model_number": "SPDK bdev Controller", 00:13:35.438 "max_namespaces": 32, 00:13:35.438 "min_cntlid": 1, 00:13:35.438 "max_cntlid": 65519, 00:13:35.438 "namespaces": [ 00:13:35.438 { 00:13:35.438 "nsid": 1, 00:13:35.438 "bdev_name": "Null4", 00:13:35.438 "name": "Null4", 00:13:35.438 "nguid": "95AC04938EA441EA82B20916DC404921", 00:13:35.438 "uuid": "95ac0493-8ea4-41ea-82b2-0916dc404921" 00:13:35.438 } 00:13:35.438 ] 00:13:35.438 } 00:13:35.438 ] 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.438 
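The trace above shows target/discovery.sh looping `for i in $(seq 1 4)` to build each subsystem: create a null bdev, create the subsystem, attach the bdev as a namespace, and add a TCP listener, then register the discovery listener and a referral on port 4430. A dry-run sketch of that sequence is below; note `rpc_cmd` is stubbed here to echo its arguments (no running nvmf target is assumed), and the `printf`-built serial number is an illustrative reconstruction, not the script's own helper:

```shell
#!/bin/sh
# Stub: print each RPC instead of sending it to a live SPDK nvmf target.
rpc_cmd() { echo "rpc_cmd $*"; }

for i in $(seq 1 4); do
  # 100 MiB null bdev with 512-byte blocks, as in the trace (102400 blocks x 512)
  rpc_cmd bdev_null_create "Null$i" 102400 512
  # Subsystem with allow-any-host (-a) and a zero-padded serial (-s)
  rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "$(printf 'SPDK%014d' "$i")"
  rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done

# Discovery listener plus the referral that shows up as Discovery Log Entry 5
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
```

Running this prints the same RPC sequence the test issues, which is why `nvme discover` then reports six log entries: the discovery subsystem itself, cnode1 through cnode4, and the 4430 referral.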
07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.438 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:35.699 rmmod nvme_tcp 00:13:35.699 rmmod nvme_fabrics 00:13:35.699 rmmod nvme_keyring 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1997548 ']' 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1997548 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1997548 ']' 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1997548 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1997548 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1997548' 00:13:35.699 killing process with pid 1997548 00:13:35.699 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1997548 00:13:35.700 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1997548 00:13:35.961 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:35.961 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:35.961 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:35.961 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:35.961 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:35.961 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:35.961 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:35.961 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:35.961 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:13:35.961 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.961 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.961 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.875 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:37.875 00:13:37.875 real 0m12.650s 00:13:37.875 user 0m8.819s 00:13:37.875 sys 0m6.877s 00:13:37.875 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.875 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:37.875 ************************************ 00:13:37.875 END TEST nvmf_target_discovery 00:13:37.875 ************************************ 00:13:37.875 07:23:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:37.875 07:23:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:37.875 07:23:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.875 07:23:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:38.136 ************************************ 00:13:38.136 START TEST nvmf_referrals 00:13:38.136 ************************************ 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:38.136 * Looking for test storage... 
00:13:38.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:38.136 07:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:38.136 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:38.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.137 
--rc genhtml_branch_coverage=1 00:13:38.137 --rc genhtml_function_coverage=1 00:13:38.137 --rc genhtml_legend=1 00:13:38.137 --rc geninfo_all_blocks=1 00:13:38.137 --rc geninfo_unexecuted_blocks=1 00:13:38.137 00:13:38.137 ' 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:38.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.137 --rc genhtml_branch_coverage=1 00:13:38.137 --rc genhtml_function_coverage=1 00:13:38.137 --rc genhtml_legend=1 00:13:38.137 --rc geninfo_all_blocks=1 00:13:38.137 --rc geninfo_unexecuted_blocks=1 00:13:38.137 00:13:38.137 ' 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:38.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.137 --rc genhtml_branch_coverage=1 00:13:38.137 --rc genhtml_function_coverage=1 00:13:38.137 --rc genhtml_legend=1 00:13:38.137 --rc geninfo_all_blocks=1 00:13:38.137 --rc geninfo_unexecuted_blocks=1 00:13:38.137 00:13:38.137 ' 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:38.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.137 --rc genhtml_branch_coverage=1 00:13:38.137 --rc genhtml_function_coverage=1 00:13:38.137 --rc genhtml_legend=1 00:13:38.137 --rc geninfo_all_blocks=1 00:13:38.137 --rc geninfo_unexecuted_blocks=1 00:13:38.137 00:13:38.137 ' 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.137 
07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.137 07:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.137 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:38.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:38.138 07:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:38.138 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:46.281 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:46.281 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:46.281 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:46.282 Found net devices under 0000:31:00.0: cvl_0_0 00:13:46.282 07:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:46.282 Found net devices under 0000:31:00.1: cvl_0_1 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:46.282 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:46.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:13:46.544 00:13:46.544 --- 10.0.0.2 ping statistics --- 00:13:46.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.544 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:46.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:46.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:13:46.544 00:13:46.544 --- 10.0.0.1 ping statistics --- 00:13:46.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.544 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2002638 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2002638 00:13:46.544 
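
The `nvmf_tcp_init` sequence traced above boils down to a short network-namespace setup. A minimal sketch follows; the interface names `cvl_0_0`/`cvl_0_1` and the `10.0.0.0/24` addresses are taken from this particular run (ice-driver netdevs found under `0000:31:00.0`/`0000:31:00.1`) and would differ on other hardware. It needs root and is not meant to be run outside a disposable test box:

```shell
# Move the target-side NIC into its own namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface and verify
# reachability in both directions, as the trace does before returning 0.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

With this in place the target (`nvmf_tgt`) is launched inside the namespace via `ip netns exec cvl_0_0_ns_spdk`, which is why the discovery listener later binds to 10.0.0.2.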
07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2002638 ']' 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:46.544 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:46.544 [2024-11-26 07:23:30.660134] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:13:46.544 [2024-11-26 07:23:30.660204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.806 [2024-11-26 07:23:30.752148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.806 [2024-11-26 07:23:30.793901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.806 [2024-11-26 07:23:30.793937] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:46.806 [2024-11-26 07:23:30.793945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.806 [2024-11-26 07:23:30.793952] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.806 [2024-11-26 07:23:30.793958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.806 [2024-11-26 07:23:30.795573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.806 [2024-11-26 07:23:30.795694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.806 [2024-11-26 07:23:30.795854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.806 [2024-11-26 07:23:30.795855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.377 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.377 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:47.377 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:47.377 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:47.377 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.638 [2024-11-26 07:23:31.518604] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.638 [2024-11-26 07:23:31.534805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:47.638 07:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:47.638 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.898 07:23:31 
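
The referral checks above pair each RPC call with an `nvme discover` against the live discovery log, asserting that both views agree. Condensed into a sketch (the `$RPC` path is an assumption standing in for the test's `rpc_cmd` wrapper; the 4430 referral port and 8009 discovery port are from this run):

```shell
# Hypothetical helper path; the log drives this through rpc_cmd.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Register three discovery referrals on the running target...
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    "$RPC" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# ...then confirm the RPC-side list...
"$RPC" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# ...matches what an initiator actually sees in the discovery log page
# (filtering out the target's own "current discovery subsystem" record).
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
    | sort
```

Both pipelines should print `127.0.0.2 127.0.0.3 127.0.0.4` in this run, which is exactly the `[[ ... == ... ]]` comparison `get_referral_ips` performs for the `rpc` and `nvme` cases.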
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:47.898 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:48.159 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:48.160 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:48.429 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:48.429 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:48.429 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:48.429 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:48.429 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:48.429 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:48.429 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:48.689 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:48.689 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:48.689 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:48.689 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:48.689 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:48.689 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:13:48.689 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:48.689 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:48.689 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.689 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:48.689 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.689 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:48.689 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:48.689 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:48.689 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:48.689 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.689 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:48.689 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:48.949 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.949 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:48.949 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:48.949 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:48.949 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:48.949 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:48.949 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:48.949 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:48.949 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:48.949 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:48.949 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:48.949 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:48.949 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:48.949 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:48.949 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:48.949 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:49.209 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:49.209 07:23:33 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:49.209 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:49.209 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:49.209 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.209 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:49.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:49.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:49.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:49.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:49.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:49.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:49.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:49.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:49.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:49.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:49.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:49.733 rmmod nvme_tcp 00:13:49.733 rmmod nvme_fabrics 00:13:49.733 rmmod nvme_keyring 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2002638 ']' 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2002638 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2002638 ']' 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2002638 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2002638 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2002638' 00:13:49.733 killing process with pid 2002638 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 2002638 00:13:49.733 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2002638 00:13:50.035 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:50.035 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:50.035 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:50.035 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:50.035 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:50.035 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:50.035 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:50.035 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:50.035 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:50.035 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.035 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.035 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.025 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:52.025 00:13:52.025 real 0m13.986s 00:13:52.025 user 0m15.923s 00:13:52.025 sys 0m7.035s 00:13:52.025 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.025 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:52.025 
************************************ 00:13:52.025 END TEST nvmf_referrals 00:13:52.025 ************************************ 00:13:52.025 07:23:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:52.025 07:23:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:52.025 07:23:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.025 07:23:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:52.025 ************************************ 00:13:52.025 START TEST nvmf_connect_disconnect 00:13:52.025 ************************************ 00:13:52.025 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:52.286 * Looking for test storage... 
00:13:52.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:52.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.286 --rc genhtml_branch_coverage=1 00:13:52.286 --rc genhtml_function_coverage=1 00:13:52.286 --rc genhtml_legend=1 00:13:52.286 --rc geninfo_all_blocks=1 00:13:52.286 --rc geninfo_unexecuted_blocks=1 00:13:52.286 00:13:52.286 ' 00:13:52.286 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:52.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.286 --rc genhtml_branch_coverage=1 00:13:52.287 --rc genhtml_function_coverage=1 00:13:52.287 --rc genhtml_legend=1 00:13:52.287 --rc geninfo_all_blocks=1 00:13:52.287 --rc geninfo_unexecuted_blocks=1 00:13:52.287 00:13:52.287 ' 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:52.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.287 --rc genhtml_branch_coverage=1 00:13:52.287 --rc genhtml_function_coverage=1 00:13:52.287 --rc genhtml_legend=1 00:13:52.287 --rc geninfo_all_blocks=1 00:13:52.287 --rc geninfo_unexecuted_blocks=1 00:13:52.287 00:13:52.287 ' 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:52.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.287 --rc genhtml_branch_coverage=1 00:13:52.287 --rc genhtml_function_coverage=1 00:13:52.287 --rc genhtml_legend=1 00:13:52.287 --rc geninfo_all_blocks=1 00:13:52.287 --rc geninfo_unexecuted_blocks=1 00:13:52.287 00:13:52.287 ' 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:52.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:52.287 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:00.428 07:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:00.428 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:00.429 07:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:00.429 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:00.429 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.429 07:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:00.429 Found net devices under 0000:31:00.0: cvl_0_0 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:00.429 07:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:00.429 Found net devices under 0000:31:00.1: cvl_0_1 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:00.429 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:00.692 07:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:00.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:14:00.692 00:14:00.692 --- 10.0.0.2 ping statistics --- 00:14:00.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.692 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:00.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:00.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:14:00.692 00:14:00.692 --- 10.0.0.1 ping statistics --- 00:14:00.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.692 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2008112 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2008112 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2008112 ']' 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:00.692 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:00.692 [2024-11-26 07:23:44.740011] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:14:00.692 [2024-11-26 07:23:44.740078] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.954 [2024-11-26 07:23:44.831656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:00.954 [2024-11-26 07:23:44.873630] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:00.954 [2024-11-26 07:23:44.873674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.954 [2024-11-26 07:23:44.873682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.954 [2024-11-26 07:23:44.873689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.954 [2024-11-26 07:23:44.873694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.954 [2024-11-26 07:23:44.875560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.954 [2024-11-26 07:23:44.875678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.954 [2024-11-26 07:23:44.875838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.954 [2024-11-26 07:23:44.875839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:01.527 07:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:01.527 [2024-11-26 07:23:45.598537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:01.527 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.527 07:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:01.788 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.788 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.788 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.788 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:01.788 [2024-11-26 07:23:45.673163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.788 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.788 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:01.788 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:01.788 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:05.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.102 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:20.102 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:20.102 07:24:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:20.102 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:14:20.102 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:20.102 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:14:20.102 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:20.102 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:20.102 rmmod nvme_tcp 00:14:20.102 rmmod nvme_fabrics 00:14:20.102 rmmod nvme_keyring 00:14:20.102 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:20.102 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:14:20.102 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:14:20.102 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2008112 ']' 00:14:20.102 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2008112 00:14:20.102 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2008112 ']' 00:14:20.102 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2008112 00:14:20.102 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:14:20.102 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:20.102 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2008112 
00:14:20.102 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:20.102 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:20.102 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2008112' 00:14:20.102 killing process with pid 2008112 00:14:20.102 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2008112 00:14:20.102 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2008112 00:14:20.362 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:20.362 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:20.362 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:20.362 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:14:20.362 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:14:20.362 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:20.362 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:14:20.362 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:20.362 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:20.362 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.362 07:24:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.362 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.274 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:22.274 00:14:22.274 real 0m30.246s 00:14:22.274 user 1m19.398s 00:14:22.274 sys 0m7.867s 00:14:22.274 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:22.274 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:22.274 ************************************ 00:14:22.274 END TEST nvmf_connect_disconnect 00:14:22.274 ************************************ 00:14:22.274 07:24:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:22.274 07:24:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:22.274 07:24:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:22.274 07:24:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:22.535 ************************************ 00:14:22.535 START TEST nvmf_multitarget 00:14:22.535 ************************************ 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:22.535 * Looking for test storage... 
00:14:22.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:22.535 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.535 --rc genhtml_branch_coverage=1 00:14:22.535 --rc genhtml_function_coverage=1 00:14:22.535 --rc genhtml_legend=1 00:14:22.535 --rc geninfo_all_blocks=1 00:14:22.535 --rc geninfo_unexecuted_blocks=1 00:14:22.535 00:14:22.535 ' 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:22.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.535 --rc genhtml_branch_coverage=1 00:14:22.535 --rc genhtml_function_coverage=1 00:14:22.535 --rc genhtml_legend=1 00:14:22.535 --rc geninfo_all_blocks=1 00:14:22.535 --rc geninfo_unexecuted_blocks=1 00:14:22.535 00:14:22.535 ' 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:22.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.535 --rc genhtml_branch_coverage=1 00:14:22.535 --rc genhtml_function_coverage=1 00:14:22.535 --rc genhtml_legend=1 00:14:22.535 --rc geninfo_all_blocks=1 00:14:22.535 --rc geninfo_unexecuted_blocks=1 00:14:22.535 00:14:22.535 ' 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:22.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.535 --rc genhtml_branch_coverage=1 00:14:22.535 --rc genhtml_function_coverage=1 00:14:22.535 --rc genhtml_legend=1 00:14:22.535 --rc geninfo_all_blocks=1 00:14:22.535 --rc geninfo_unexecuted_blocks=1 00:14:22.535 00:14:22.535 ' 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.535 07:24:06 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:22.535 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:22.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.536 07:24:06 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:14:22.536 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:14:30.676 07:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:30.676 07:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:30.676 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:30.676 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.676 07:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:30.676 Found net devices under 0000:31:00.0: cvl_0_0 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.676 
07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:30.676 Found net devices under 0000:31:00.1: cvl_0_1 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:30.676 07:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:30.676 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:30.937 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:30.937 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:30.937 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:30.937 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:30.937 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:30.937 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:31.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:14:31.197 00:14:31.197 --- 10.0.0.2 ping statistics --- 00:14:31.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.197 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:31.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:14:31.197 00:14:31.197 --- 10.0.0.1 ping statistics --- 00:14:31.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.197 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2017334 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 2017334 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2017334 ']' 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.197 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:31.197 [2024-11-26 07:24:15.217049] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:14:31.197 [2024-11-26 07:24:15.217102] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.197 [2024-11-26 07:24:15.305207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:31.459 [2024-11-26 07:24:15.342685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.459 [2024-11-26 07:24:15.342721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:31.459 [2024-11-26 07:24:15.342730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.459 [2024-11-26 07:24:15.342736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.459 [2024-11-26 07:24:15.342742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.459 [2024-11-26 07:24:15.344490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.459 [2024-11-26 07:24:15.344606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.459 [2024-11-26 07:24:15.344760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.459 [2024-11-26 07:24:15.344761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:32.030 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.030 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:14:32.030 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:32.030 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:32.030 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:32.030 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.030 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:32.030 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:32.030 07:24:16 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:32.291 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:32.291 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:32.291 "nvmf_tgt_1" 00:14:32.291 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:32.291 "nvmf_tgt_2" 00:14:32.291 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:32.291 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:32.552 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:32.552 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:32.552 true 00:14:32.552 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:32.552 true 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:32.813 rmmod nvme_tcp 00:14:32.813 rmmod nvme_fabrics 00:14:32.813 rmmod nvme_keyring 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2017334 ']' 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2017334 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2017334 ']' 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2017334 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2017334 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2017334' 00:14:32.813 killing process with pid 2017334 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2017334 00:14:32.813 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2017334 00:14:33.074 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:33.074 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:33.074 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:33.074 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:14:33.074 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:14:33.074 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:33.074 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:14:33.074 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:33.074 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:33.074 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.074 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:33.074 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.623 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:35.623 00:14:35.623 real 0m12.725s 00:14:35.623 user 0m10.117s 00:14:35.623 sys 0m6.824s 00:14:35.623 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.623 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:35.623 ************************************ 00:14:35.623 END TEST nvmf_multitarget 00:14:35.623 ************************************ 00:14:35.623 07:24:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:35.623 07:24:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:35.623 07:24:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.623 07:24:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:35.623 ************************************ 00:14:35.623 START TEST nvmf_rpc 00:14:35.623 ************************************ 00:14:35.623 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:35.623 * Looking for test storage... 
00:14:35.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.623 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:35.624 07:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:35.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.624 --rc genhtml_branch_coverage=1 00:14:35.624 --rc genhtml_function_coverage=1 00:14:35.624 --rc genhtml_legend=1 00:14:35.624 --rc geninfo_all_blocks=1 00:14:35.624 --rc geninfo_unexecuted_blocks=1 
00:14:35.624 00:14:35.624 ' 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:35.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.624 --rc genhtml_branch_coverage=1 00:14:35.624 --rc genhtml_function_coverage=1 00:14:35.624 --rc genhtml_legend=1 00:14:35.624 --rc geninfo_all_blocks=1 00:14:35.624 --rc geninfo_unexecuted_blocks=1 00:14:35.624 00:14:35.624 ' 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:35.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.624 --rc genhtml_branch_coverage=1 00:14:35.624 --rc genhtml_function_coverage=1 00:14:35.624 --rc genhtml_legend=1 00:14:35.624 --rc geninfo_all_blocks=1 00:14:35.624 --rc geninfo_unexecuted_blocks=1 00:14:35.624 00:14:35.624 ' 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:35.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.624 --rc genhtml_branch_coverage=1 00:14:35.624 --rc genhtml_function_coverage=1 00:14:35.624 --rc genhtml_legend=1 00:14:35.624 --rc geninfo_all_blocks=1 00:14:35.624 --rc geninfo_unexecuted_blocks=1 00:14:35.624 00:14:35.624 ' 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.624 07:24:19 
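The `lt 1.15 2` trace above walks scripts/common.sh's component-wise version comparison: split both versions on `.-:` into arrays, then compare numerically field by field. A simplified sketch of that logic (the function name `version_lt` is illustrative, not the actual SPDK helper; assumes plain decimal components, as the trace's `^[0-9]+$` checks also do):

```shell
# Component-wise "less than" for dotted version strings,
# mirroring the cmp_versions walk-through in the log above.
version_lt() {
    local IFS=.-:          # split on the same separators the trace uses
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1               # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

As in the trace, `1.15` vs `2` is decided on the first component (1 < 2), so the remaining fields are never compared.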
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:35.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:35.624 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:35.625 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:35.625 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.625 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:35.625 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.625 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:35.625 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:35.625 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:14:35.625 07:24:19 
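The `[: : integer expression expected` message logged above is a classic `test(1)` failure mode: nvmf/common.sh line 33 runs `'[' '' -eq 1 ']'`, and `-eq` requires an integer on both sides, so an empty string errors out (the script tolerates it because of the surrounding `set +e`-style flow). A minimal sketch of the failure and the usual default-expansion guard:

```shell
# Reproducing the error class seen in the log, then guarding it.
maybe_empty=''                      # e.g. an unset feature flag

# [ "$maybe_empty" -eq 1 ]         # would print: [: : integer expression expected

# Defaulting the expansion keeps -eq happy when the variable is empty/unset:
if [ "${maybe_empty:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

In bash specifically, `[[ $maybe_empty -eq 1 ]]` also avoids the error (an empty operand evaluates to 0 inside `[[ ]]`), but the `${var:-0}` form works in plain POSIX `[` as well.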
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.763 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:43.763 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:14:43.763 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:43.763 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:43.763 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:43.763 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:43.763 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:43.763 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:14:43.763 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:43.763 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:14:43.763 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:14:43.763 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.764 
07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 
(0x8086 - 0x159b)' 00:14:43.764 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:43.764 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:43.764 Found net devices under 0000:31:00.0: cvl_0_0 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:43.764 Found net devices under 0000:31:00.1: cvl_0_1 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.764 07:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:43.764 
07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:43.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:14:43.764 00:14:43.764 --- 10.0.0.2 ping statistics --- 00:14:43.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.764 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:14:43.764 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
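The namespace plumbing traced above follows a standard pattern for single-host NVMe-oF TCP testing: move one NIC port into a fresh netns to act as the target, address both sides on 10.0.0.0/24, open TCP 4420 with a tagged iptables rule, and ping across to verify. A dry-run sketch of that sequence (interface/namespace names and addresses taken from the log; commands are echoed rather than executed since they require root, so drop the `echo` in `run` to apply them for real):

```shell
# Dry-run of the netns setup sequence from the log above.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0 INIT_IF=cvl_0_1
run() { echo "+ $*"; }     # swap body for: "$@"  to actually execute

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                       # target port into the namespace
run ip addr add 10.0.0.1/24 dev "$INIT_IF"                  # initiator side, default ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INIT_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
run ping -c 1 10.0.0.2                                      # initiator -> target reachability
```

The `SPDK_NVMF:` comment on the iptables rule is what the later `iptables-save | grep -v SPDK_NVMF | iptables-restore` cleanup step keys on to remove exactly the rules the test added.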
00:14:43.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:14:43.764 00:14:43.764 --- 10.0.0.1 ping statistics --- 00:14:43.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.764 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:14:43.765 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.765 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:14:43.765 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:43.765 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.765 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:43.765 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:43.765 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.765 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:43.765 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:44.026 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:44.026 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:44.026 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:44.026 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.026 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2022394 00:14:44.026 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2022394 00:14:44.026 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:44.026 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2022394 ']' 00:14:44.026 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.026 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.026 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.026 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.026 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.026 [2024-11-26 07:24:27.975371] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:14:44.027 [2024-11-26 07:24:27.975441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.027 [2024-11-26 07:24:28.066690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:44.027 [2024-11-26 07:24:28.107845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.027 [2024-11-26 07:24:28.107886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:44.027 [2024-11-26 07:24:28.107895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.027 [2024-11-26 07:24:28.107902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.027 [2024-11-26 07:24:28.107907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.027 [2024-11-26 07:24:28.109521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.027 [2024-11-26 07:24:28.109638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.027 [2024-11-26 07:24:28.109795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.027 [2024-11-26 07:24:28.109796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.969 07:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:44.969 "tick_rate": 2400000000, 00:14:44.969 "poll_groups": [ 00:14:44.969 { 00:14:44.969 "name": "nvmf_tgt_poll_group_000", 00:14:44.969 "admin_qpairs": 0, 00:14:44.969 "io_qpairs": 0, 00:14:44.969 "current_admin_qpairs": 0, 00:14:44.969 "current_io_qpairs": 0, 00:14:44.969 "pending_bdev_io": 0, 00:14:44.969 "completed_nvme_io": 0, 00:14:44.969 "transports": [] 00:14:44.969 }, 00:14:44.969 { 00:14:44.969 "name": "nvmf_tgt_poll_group_001", 00:14:44.969 "admin_qpairs": 0, 00:14:44.969 "io_qpairs": 0, 00:14:44.969 "current_admin_qpairs": 0, 00:14:44.969 "current_io_qpairs": 0, 00:14:44.969 "pending_bdev_io": 0, 00:14:44.969 "completed_nvme_io": 0, 00:14:44.969 "transports": [] 00:14:44.969 }, 00:14:44.969 { 00:14:44.969 "name": "nvmf_tgt_poll_group_002", 00:14:44.969 "admin_qpairs": 0, 00:14:44.969 "io_qpairs": 0, 00:14:44.969 "current_admin_qpairs": 0, 00:14:44.969 "current_io_qpairs": 0, 00:14:44.969 "pending_bdev_io": 0, 00:14:44.969 "completed_nvme_io": 0, 00:14:44.969 "transports": [] 00:14:44.969 }, 00:14:44.969 { 00:14:44.969 "name": "nvmf_tgt_poll_group_003", 00:14:44.969 "admin_qpairs": 0, 00:14:44.969 "io_qpairs": 0, 00:14:44.969 "current_admin_qpairs": 0, 00:14:44.969 "current_io_qpairs": 0, 00:14:44.969 "pending_bdev_io": 0, 00:14:44.969 "completed_nvme_io": 0, 00:14:44.969 "transports": [] 00:14:44.969 } 00:14:44.969 ] 00:14:44.969 }' 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:44.969 07:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.969 [2024-11-26 07:24:28.952773] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:44.969 "tick_rate": 2400000000, 00:14:44.969 "poll_groups": [ 00:14:44.969 { 00:14:44.969 "name": "nvmf_tgt_poll_group_000", 00:14:44.969 "admin_qpairs": 0, 00:14:44.969 "io_qpairs": 0, 00:14:44.969 "current_admin_qpairs": 0, 00:14:44.969 "current_io_qpairs": 0, 00:14:44.969 "pending_bdev_io": 0, 00:14:44.969 "completed_nvme_io": 0, 00:14:44.969 "transports": [ 00:14:44.969 { 00:14:44.969 "trtype": "TCP" 00:14:44.969 } 00:14:44.969 ] 00:14:44.969 }, 00:14:44.969 { 00:14:44.969 "name": "nvmf_tgt_poll_group_001", 00:14:44.969 "admin_qpairs": 0, 00:14:44.969 "io_qpairs": 0, 00:14:44.969 "current_admin_qpairs": 0, 00:14:44.969 "current_io_qpairs": 0, 00:14:44.969 "pending_bdev_io": 0, 00:14:44.969 
"completed_nvme_io": 0, 00:14:44.969 "transports": [ 00:14:44.969 { 00:14:44.969 "trtype": "TCP" 00:14:44.969 } 00:14:44.969 ] 00:14:44.969 }, 00:14:44.969 { 00:14:44.969 "name": "nvmf_tgt_poll_group_002", 00:14:44.969 "admin_qpairs": 0, 00:14:44.969 "io_qpairs": 0, 00:14:44.969 "current_admin_qpairs": 0, 00:14:44.969 "current_io_qpairs": 0, 00:14:44.969 "pending_bdev_io": 0, 00:14:44.969 "completed_nvme_io": 0, 00:14:44.969 "transports": [ 00:14:44.969 { 00:14:44.969 "trtype": "TCP" 00:14:44.969 } 00:14:44.969 ] 00:14:44.969 }, 00:14:44.969 { 00:14:44.969 "name": "nvmf_tgt_poll_group_003", 00:14:44.969 "admin_qpairs": 0, 00:14:44.969 "io_qpairs": 0, 00:14:44.969 "current_admin_qpairs": 0, 00:14:44.969 "current_io_qpairs": 0, 00:14:44.969 "pending_bdev_io": 0, 00:14:44.969 "completed_nvme_io": 0, 00:14:44.969 "transports": [ 00:14:44.969 { 00:14:44.969 "trtype": "TCP" 00:14:44.969 } 00:14:44.969 ] 00:14:44.969 } 00:14:44.969 ] 00:14:44.969 }' 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:44.969 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:44.969 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:44.969 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:44.969 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:44.969 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:44.969 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:44.969 
07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:44.969 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:44.969 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:44.969 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:44.969 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:44.969 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.969 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.232 Malloc1 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:45.232 07:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.232 [2024-11-26 07:24:29.156151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:14:45.232 [2024-11-26 07:24:29.192987] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:14:45.232 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:45.232 could not add new controller: failed to write to nvme-fabrics device 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.232 07:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:47.144 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:47.144 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:47.144 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:47.144 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:47.144 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:49.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:49.056 07:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:49.056 07:24:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:49.056 [2024-11-26 07:24:32.978994] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:14:49.056 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:49.056 could not add new controller: failed to write to nvme-fabrics device 00:14:49.056 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:49.056 
07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:49.056 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:49.056 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:49.056 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:49.056 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.056 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:49.056 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.056 07:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:50.967 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:50.967 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:50.967 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:50.967 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:50.967 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:52.880 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:52.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:52.881 07:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.881 [2024-11-26 07:24:36.743872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.881 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:54.265 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:54.265 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:54.265 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:54.265 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:54.265 07:24:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:56.177 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:56.177 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:56.177 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:56.177 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:56.177 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:56.177 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:56.177 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:56.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:56.438 
07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.438 [2024-11-26 07:24:40.455221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.438 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:58.452 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:58.452 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:58.452 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:58.452 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:58.452 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:00.368 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:00.368 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:00.368 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:00.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.369 07:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.369 [2024-11-26 07:24:44.218692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.369 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:01.755 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:01.755 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:01.755 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:01.755 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:01.755 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:04.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.301 [2024-11-26 07:24:47.974436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.301 07:24:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.301 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.301 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:05.698 07:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:05.698 07:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:05.698 07:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:15:05.698 07:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:05.698 07:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:07.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.616 [2024-11-26 07:24:51.690054] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.616 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:09.531 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:09.531 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:09.531 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:09.531 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:09.531 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:11.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.446 [2024-11-26 07:24:55.429061] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.446 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.447 [2024-11-26 07:24:55.497234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.447 
07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:15:11.447 [2024-11-26 07:24:55.561410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.447 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.708 [2024-11-26 07:24:55.629629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.708 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.709 [2024-11-26 07:24:55.693870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:11.709 "tick_rate": 2400000000, 00:15:11.709 "poll_groups": [ 00:15:11.709 { 00:15:11.709 "name": "nvmf_tgt_poll_group_000", 00:15:11.709 "admin_qpairs": 0, 00:15:11.709 "io_qpairs": 224, 00:15:11.709 "current_admin_qpairs": 0, 00:15:11.709 "current_io_qpairs": 0, 00:15:11.709 "pending_bdev_io": 0, 00:15:11.709 "completed_nvme_io": 225, 00:15:11.709 "transports": [ 00:15:11.709 { 00:15:11.709 "trtype": "TCP" 00:15:11.709 } 00:15:11.709 ] 00:15:11.709 }, 00:15:11.709 { 00:15:11.709 "name": "nvmf_tgt_poll_group_001", 00:15:11.709 "admin_qpairs": 1, 00:15:11.709 "io_qpairs": 223, 00:15:11.709 "current_admin_qpairs": 0, 00:15:11.709 "current_io_qpairs": 0, 00:15:11.709 "pending_bdev_io": 0, 00:15:11.709 "completed_nvme_io": 223, 00:15:11.709 "transports": [ 00:15:11.709 { 00:15:11.709 "trtype": "TCP" 00:15:11.709 } 00:15:11.709 ] 00:15:11.709 }, 00:15:11.709 { 00:15:11.709 "name": "nvmf_tgt_poll_group_002", 00:15:11.709 "admin_qpairs": 6, 00:15:11.709 "io_qpairs": 218, 00:15:11.709 "current_admin_qpairs": 0, 00:15:11.709 "current_io_qpairs": 0, 00:15:11.709 "pending_bdev_io": 0, 
00:15:11.709 "completed_nvme_io": 269, 00:15:11.709 "transports": [ 00:15:11.709 { 00:15:11.709 "trtype": "TCP" 00:15:11.709 } 00:15:11.709 ] 00:15:11.709 }, 00:15:11.709 { 00:15:11.709 "name": "nvmf_tgt_poll_group_003", 00:15:11.709 "admin_qpairs": 0, 00:15:11.709 "io_qpairs": 224, 00:15:11.709 "current_admin_qpairs": 0, 00:15:11.709 "current_io_qpairs": 0, 00:15:11.709 "pending_bdev_io": 0, 00:15:11.709 "completed_nvme_io": 522, 00:15:11.709 "transports": [ 00:15:11.709 { 00:15:11.709 "trtype": "TCP" 00:15:11.709 } 00:15:11.709 ] 00:15:11.709 } 00:15:11.709 ] 00:15:11.709 }' 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:11.709 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:11.970 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:11.971 rmmod nvme_tcp 00:15:11.971 rmmod nvme_fabrics 00:15:11.971 rmmod nvme_keyring 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2022394 ']' 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2022394 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2022394 ']' 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2022394 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2022394 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2022394' 00:15:11.971 killing process with pid 2022394 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2022394 00:15:11.971 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2022394 00:15:12.232 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:12.232 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:12.232 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:12.232 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:15:12.232 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:12.232 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:15:12.232 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:15:12.232 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:12.232 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:12.232 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.232 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.232 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.144 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:14.144 00:15:14.144 real 0m38.984s 00:15:14.144 user 1m54.213s 00:15:14.144 sys 0m8.617s 00:15:14.144 07:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.144 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.144 ************************************ 00:15:14.144 END TEST nvmf_rpc 00:15:14.144 ************************************ 00:15:14.144 07:24:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:14.144 07:24:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:14.144 07:24:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.144 07:24:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:14.407 ************************************ 00:15:14.407 START TEST nvmf_invalid 00:15:14.407 ************************************ 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:14.407 * Looking for test storage... 
00:15:14.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:14.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.407 --rc genhtml_branch_coverage=1 00:15:14.407 --rc 
genhtml_function_coverage=1 00:15:14.407 --rc genhtml_legend=1 00:15:14.407 --rc geninfo_all_blocks=1 00:15:14.407 --rc geninfo_unexecuted_blocks=1 00:15:14.407 00:15:14.407 ' 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:14.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.407 --rc genhtml_branch_coverage=1 00:15:14.407 --rc genhtml_function_coverage=1 00:15:14.407 --rc genhtml_legend=1 00:15:14.407 --rc geninfo_all_blocks=1 00:15:14.407 --rc geninfo_unexecuted_blocks=1 00:15:14.407 00:15:14.407 ' 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:14.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.407 --rc genhtml_branch_coverage=1 00:15:14.407 --rc genhtml_function_coverage=1 00:15:14.407 --rc genhtml_legend=1 00:15:14.407 --rc geninfo_all_blocks=1 00:15:14.407 --rc geninfo_unexecuted_blocks=1 00:15:14.407 00:15:14.407 ' 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:14.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.407 --rc genhtml_branch_coverage=1 00:15:14.407 --rc genhtml_function_coverage=1 00:15:14.407 --rc genhtml_legend=1 00:15:14.407 --rc geninfo_all_blocks=1 00:15:14.407 --rc geninfo_unexecuted_blocks=1 00:15:14.407 00:15:14.407 ' 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.407 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.408 07:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:14.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:14.408 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:14.670 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:14.670 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:14.670 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:14.670 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:14.670 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:14.670 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:14.670 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:14.670 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.670 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:14.670 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:14.670 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:14.670 07:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.670 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:14.670 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.670 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:14.670 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:14.670 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:15:14.670 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:15:22.819 07:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.819 07:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:22.819 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:22.819 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:22.819 Found net devices under 0000:31:00.0: cvl_0_0 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.819 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:22.820 Found net devices under 0000:31:00.1: cvl_0_1 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.820 07:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:22.820 07:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:22.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:22.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:15:22.820 00:15:22.820 --- 10.0.0.2 ping statistics --- 00:15:22.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.820 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:22.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:22.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:15:22.820 00:15:22.820 --- 10.0.0.1 ping statistics --- 00:15:22.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.820 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:22.820 07:25:06 
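The `nvmf_tcp_init` sequence traced above flushes the two interfaces, creates a network namespace, moves the target-side interface into it, assigns the 10.0.0.1/24 (initiator) and 10.0.0.2/24 (target) addresses, brings the links up, and opens TCP port 4420. The sketch below replays that sequence as a dry run: `RUN=echo` makes it print each command instead of executing it, since the real commands require root. Interface and namespace names are the ones from this log.

```shell
# Dry-run sketch of the namespace plumbing performed by nvmf_tcp_init.
# Set RUN= (empty) and run as root to execute for real.
RUN=echo
NS=cvl_0_0_ns_spdk

$RUN ip -4 addr flush cvl_0_0
$RUN ip -4 addr flush cvl_0_1
$RUN ip netns add "$NS"
$RUN ip link set cvl_0_0 netns "$NS"               # target side lives in the netns
$RUN ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP, host side
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
$RUN ip link set cvl_0_1 up
$RUN ip netns exec "$NS" ip link set cvl_0_0 up
$RUN ip netns exec "$NS" ip link set lo up
$RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The two `ping -c 1` probes in the log (one from the host, one from inside the namespace) then confirm the veth-free, two-port loopback topology is reachable in both directions.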
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2032627 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2032627 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2032627 ']' 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
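`nvmfappstart` launches `nvmf_tgt` through `ip netns exec`, which works because common.sh@293 earlier prepended the namespace wrapper onto the app array: `NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")`. A small sketch of that array-prefix trick, with an illustrative binary path (not the workspace path from this run):

```shell
# Sketch of the array-prefix trick from nvmf/common.sh@266/@293: prepending
# the netns wrapper so every later "${NVMF_APP[@]}" invocation runs inside
# the namespace. The nvmf_tgt path here is illustrative.
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=(/usr/local/bin/nvmf_tgt -i 0)

# Word-by-word concatenation; quoting each expansion keeps arguments intact
# even if they contain spaces.
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

echo "${NVMF_APP[@]}"
```

Keeping the command as an array (rather than a flat string) is what makes this safe: there is no re-splitting or glob expansion when the command is finally invoked as `"${NVMF_APP[@]}"`.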
00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.820 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:22.820 [2024-11-26 07:25:06.888308] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:15:22.820 [2024-11-26 07:25:06.888376] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.082 [2024-11-26 07:25:06.979995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:23.082 [2024-11-26 07:25:07.021656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.082 [2024-11-26 07:25:07.021694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.082 [2024-11-26 07:25:07.021702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.082 [2024-11-26 07:25:07.021709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.082 [2024-11-26 07:25:07.021715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
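The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message with `max_retries=100` reflects the waitforlisten pattern: poll until the RPC socket appears, giving up after a bounded number of attempts. The real helper in autotest_common.sh also verifies the pid is still alive and that the socket answers RPCs; this hedged sketch only polls for the path:

```shell
# Simplified sketch of the waitforlisten idea: poll for a path with a retry
# budget. The actual SPDK helper additionally checks the target pid and
# issues a probe RPC; this version checks existence only.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0   # found it, e.g. /var/tmp/spdk.sock
        sleep 0.1
    done
    return 1                         # budget exhausted
}
```

Usage would be `wait_for_path /var/tmp/spdk.sock 100 || exit 1` right after launching the target in the background.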
00:15:23.082 [2024-11-26 07:25:07.023349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.082 [2024-11-26 07:25:07.023476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.082 [2024-11-26 07:25:07.023631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.082 [2024-11-26 07:25:07.023632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:23.654 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.654 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:15:23.654 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:23.654 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:23.654 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:23.654 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.654 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:23.654 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12135 00:15:23.914 [2024-11-26 07:25:07.894820] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:23.914 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:23.914 { 00:15:23.914 "nqn": "nqn.2016-06.io.spdk:cnode12135", 00:15:23.914 "tgt_name": "foobar", 00:15:23.914 "method": "nvmf_create_subsystem", 00:15:23.914 "req_id": 1 00:15:23.914 } 00:15:23.914 Got JSON-RPC error 
response 00:15:23.914 response: 00:15:23.914 { 00:15:23.914 "code": -32603, 00:15:23.914 "message": "Unable to find target foobar" 00:15:23.914 }' 00:15:23.914 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:23.914 { 00:15:23.914 "nqn": "nqn.2016-06.io.spdk:cnode12135", 00:15:23.914 "tgt_name": "foobar", 00:15:23.914 "method": "nvmf_create_subsystem", 00:15:23.914 "req_id": 1 00:15:23.914 } 00:15:23.914 Got JSON-RPC error response 00:15:23.914 response: 00:15:23.914 { 00:15:23.914 "code": -32603, 00:15:23.914 "message": "Unable to find target foobar" 00:15:23.914 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:23.914 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:23.914 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13196 00:15:24.176 [2024-11-26 07:25:08.087485] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13196: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:24.176 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:24.176 { 00:15:24.176 "nqn": "nqn.2016-06.io.spdk:cnode13196", 00:15:24.176 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:24.176 "method": "nvmf_create_subsystem", 00:15:24.176 "req_id": 1 00:15:24.176 } 00:15:24.176 Got JSON-RPC error response 00:15:24.176 response: 00:15:24.176 { 00:15:24.176 "code": -32602, 00:15:24.176 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:24.176 }' 00:15:24.176 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:24.176 { 00:15:24.176 "nqn": "nqn.2016-06.io.spdk:cnode13196", 00:15:24.176 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:24.176 "method": "nvmf_create_subsystem", 
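Each negative test above captures the JSON-RPC failure text into `out` and validates it with a `[[ ]]` glob match; xtrace renders the pattern with every character backslash-escaped (`*\I\n\v\a\l\i\d\ \S\N*`), which is just `*"Invalid SN"*`. A condensed sketch of that check, with the response body pasted from this log:

```shell
# Sketch of the error-matching step from target/invalid.sh. The JSON below
# is the (abridged) response captured in this log; the glob match is what
# the xtrace shows in escaped form.
out='request:
{
  "code": -32602,
  "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
}'

if [[ $out == *"Invalid SN"* ]]; then
    echo match
fi
```

Note that the right-hand side of `==` inside `[[ ]]` is a glob pattern, so the literal part must be quoted (or escaped, as xtrace does) to prevent its characters from being treated as pattern metacharacters.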
00:15:24.176 "req_id": 1 00:15:24.176 } 00:15:24.176 Got JSON-RPC error response 00:15:24.176 response: 00:15:24.176 { 00:15:24.176 "code": -32602, 00:15:24.176 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:24.176 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:24.176 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:24.176 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6572 00:15:24.176 [2024-11-26 07:25:08.280093] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6572: invalid model number 'SPDK_Controller' 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:24.437 { 00:15:24.437 "nqn": "nqn.2016-06.io.spdk:cnode6572", 00:15:24.437 "model_number": "SPDK_Controller\u001f", 00:15:24.437 "method": "nvmf_create_subsystem", 00:15:24.437 "req_id": 1 00:15:24.437 } 00:15:24.437 Got JSON-RPC error response 00:15:24.437 response: 00:15:24.437 { 00:15:24.437 "code": -32602, 00:15:24.437 "message": "Invalid MN SPDK_Controller\u001f" 00:15:24.437 }' 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:24.437 { 00:15:24.437 "nqn": "nqn.2016-06.io.spdk:cnode6572", 00:15:24.437 "model_number": "SPDK_Controller\u001f", 00:15:24.437 "method": "nvmf_create_subsystem", 00:15:24.437 "req_id": 1 00:15:24.437 } 00:15:24.437 Got JSON-RPC error response 00:15:24.437 response: 00:15:24.437 { 00:15:24.437 "code": -32602, 00:15:24.437 "message": "Invalid MN SPDK_Controller\u001f" 00:15:24.437 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.437 07:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:15:24.437 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:24.438 07:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:24.438 07:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:24.438 07:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.438 07:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.438 07:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ F == \- ]] 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'F,[K}i.PT,C_U4MN{Q }[' 00:15:24.438 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'F,[K}i.PT,C_U4MN{Q }[' nqn.2016-06.io.spdk:cnode25463 00:15:24.700 [2024-11-26 07:25:08.633220] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25463: invalid serial number 'F,[K}i.PT,C_U4MN{Q }[' 00:15:24.700 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:24.700 { 00:15:24.700 "nqn": "nqn.2016-06.io.spdk:cnode25463", 00:15:24.700 "serial_number": "F,[K}i.PT,C_U4MN{Q }[", 00:15:24.700 "method": "nvmf_create_subsystem", 00:15:24.700 "req_id": 1 00:15:24.700 } 00:15:24.700 Got JSON-RPC error response 00:15:24.700 response: 00:15:24.700 { 00:15:24.700 "code": -32602, 00:15:24.700 "message": "Invalid SN F,[K}i.PT,C_U4MN{Q }[" 00:15:24.700 }' 00:15:24.700 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:24.700 { 00:15:24.700 "nqn": "nqn.2016-06.io.spdk:cnode25463", 00:15:24.700 "serial_number": "F,[K}i.PT,C_U4MN{Q }[", 00:15:24.700 "method": "nvmf_create_subsystem", 00:15:24.700 "req_id": 1 00:15:24.700 } 00:15:24.700 Got JSON-RPC error response 00:15:24.700 response: 00:15:24.700 { 00:15:24.700 "code": -32602, 00:15:24.700 "message": "Invalid SN F,[K}i.PT,C_U4MN{Q }[" 00:15:24.700 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:24.700 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:24.700 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:24.700 07:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.701 07:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:15:24.701 07:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:24.701 07:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:15:24.701 07:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.701 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:24.702 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:24.702 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:24.702 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.702 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.963 07:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:15:24.963 07:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:24.963 07:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:15:24.963 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:24.964 07:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.964 07:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.964 07:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ T == \- ]] 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Tu4iG7*RrZyZq*ZF&qgoI'\''i7yxxm/Q,aiiolp~N-(' 00:15:24.964 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Tu4iG7*RrZyZq*ZF&qgoI'\''i7yxxm/Q,aiiolp~N-(' nqn.2016-06.io.spdk:cnode10738 00:15:25.231 [2024-11-26 07:25:09.142879] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10738: invalid model number 'Tu4iG7*RrZyZq*ZF&qgoI'i7yxxm/Q,aiiolp~N-(' 00:15:25.231 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:15:25.231 { 00:15:25.231 "nqn": "nqn.2016-06.io.spdk:cnode10738", 00:15:25.231 "model_number": "Tu4iG7*RrZyZq*ZF&qgoI'\''i7yxxm/Q,aiiolp~N-(", 00:15:25.231 "method": "nvmf_create_subsystem", 00:15:25.231 "req_id": 1 00:15:25.231 } 00:15:25.231 Got JSON-RPC error response 00:15:25.231 response: 00:15:25.231 { 00:15:25.231 "code": -32602, 00:15:25.231 "message": "Invalid MN Tu4iG7*RrZyZq*ZF&qgoI'\''i7yxxm/Q,aiiolp~N-(" 00:15:25.231 }' 00:15:25.231 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:15:25.231 { 00:15:25.231 
"nqn": "nqn.2016-06.io.spdk:cnode10738", 00:15:25.231 "model_number": "Tu4iG7*RrZyZq*ZF&qgoI'i7yxxm/Q,aiiolp~N-(", 00:15:25.231 "method": "nvmf_create_subsystem", 00:15:25.231 "req_id": 1 00:15:25.231 } 00:15:25.231 Got JSON-RPC error response 00:15:25.231 response: 00:15:25.231 { 00:15:25.231 "code": -32602, 00:15:25.231 "message": "Invalid MN Tu4iG7*RrZyZq*ZF&qgoI'i7yxxm/Q,aiiolp~N-(" 00:15:25.231 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:25.231 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:25.231 [2024-11-26 07:25:09.327573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.231 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:25.492 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:25.492 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:15:25.492 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:25.492 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:15:25.492 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:25.753 [2024-11-26 07:25:09.710203] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:25.753 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:15:25.753 { 00:15:25.753 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:25.753 "listen_address": { 00:15:25.753 "trtype": "tcp", 00:15:25.753 "traddr": "", 00:15:25.753 "trsvcid": 
"4421" 00:15:25.753 }, 00:15:25.753 "method": "nvmf_subsystem_remove_listener", 00:15:25.753 "req_id": 1 00:15:25.753 } 00:15:25.753 Got JSON-RPC error response 00:15:25.753 response: 00:15:25.753 { 00:15:25.753 "code": -32602, 00:15:25.753 "message": "Invalid parameters" 00:15:25.753 }' 00:15:25.753 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:15:25.753 { 00:15:25.753 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:25.753 "listen_address": { 00:15:25.753 "trtype": "tcp", 00:15:25.753 "traddr": "", 00:15:25.753 "trsvcid": "4421" 00:15:25.753 }, 00:15:25.753 "method": "nvmf_subsystem_remove_listener", 00:15:25.753 "req_id": 1 00:15:25.753 } 00:15:25.753 Got JSON-RPC error response 00:15:25.753 response: 00:15:25.753 { 00:15:25.753 "code": -32602, 00:15:25.753 "message": "Invalid parameters" 00:15:25.753 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:25.753 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4750 -i 0 00:15:26.015 [2024-11-26 07:25:09.898775] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4750: invalid cntlid range [0-65519] 00:15:26.015 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:15:26.015 { 00:15:26.015 "nqn": "nqn.2016-06.io.spdk:cnode4750", 00:15:26.015 "min_cntlid": 0, 00:15:26.015 "method": "nvmf_create_subsystem", 00:15:26.015 "req_id": 1 00:15:26.015 } 00:15:26.015 Got JSON-RPC error response 00:15:26.015 response: 00:15:26.015 { 00:15:26.015 "code": -32602, 00:15:26.015 "message": "Invalid cntlid range [0-65519]" 00:15:26.015 }' 00:15:26.015 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:15:26.015 { 00:15:26.015 "nqn": "nqn.2016-06.io.spdk:cnode4750", 00:15:26.015 "min_cntlid": 0, 00:15:26.015 "method": 
"nvmf_create_subsystem", 00:15:26.015 "req_id": 1 00:15:26.015 } 00:15:26.015 Got JSON-RPC error response 00:15:26.015 response: 00:15:26.015 { 00:15:26.015 "code": -32602, 00:15:26.015 "message": "Invalid cntlid range [0-65519]" 00:15:26.015 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:26.015 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17004 -i 65520 00:15:26.015 [2024-11-26 07:25:10.079381] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17004: invalid cntlid range [65520-65519] 00:15:26.015 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:15:26.015 { 00:15:26.015 "nqn": "nqn.2016-06.io.spdk:cnode17004", 00:15:26.015 "min_cntlid": 65520, 00:15:26.015 "method": "nvmf_create_subsystem", 00:15:26.015 "req_id": 1 00:15:26.015 } 00:15:26.015 Got JSON-RPC error response 00:15:26.015 response: 00:15:26.015 { 00:15:26.015 "code": -32602, 00:15:26.015 "message": "Invalid cntlid range [65520-65519]" 00:15:26.015 }' 00:15:26.015 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:26.015 { 00:15:26.015 "nqn": "nqn.2016-06.io.spdk:cnode17004", 00:15:26.015 "min_cntlid": 65520, 00:15:26.015 "method": "nvmf_create_subsystem", 00:15:26.015 "req_id": 1 00:15:26.015 } 00:15:26.015 Got JSON-RPC error response 00:15:26.015 response: 00:15:26.015 { 00:15:26.015 "code": -32602, 00:15:26.015 "message": "Invalid cntlid range [65520-65519]" 00:15:26.015 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:26.015 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26413 -I 0 00:15:26.276 [2024-11-26 07:25:10.267929] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode26413: invalid cntlid range [1-0] 00:15:26.276 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:26.276 { 00:15:26.276 "nqn": "nqn.2016-06.io.spdk:cnode26413", 00:15:26.276 "max_cntlid": 0, 00:15:26.276 "method": "nvmf_create_subsystem", 00:15:26.276 "req_id": 1 00:15:26.276 } 00:15:26.276 Got JSON-RPC error response 00:15:26.276 response: 00:15:26.276 { 00:15:26.276 "code": -32602, 00:15:26.276 "message": "Invalid cntlid range [1-0]" 00:15:26.276 }' 00:15:26.276 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:26.276 { 00:15:26.276 "nqn": "nqn.2016-06.io.spdk:cnode26413", 00:15:26.276 "max_cntlid": 0, 00:15:26.276 "method": "nvmf_create_subsystem", 00:15:26.276 "req_id": 1 00:15:26.276 } 00:15:26.276 Got JSON-RPC error response 00:15:26.276 response: 00:15:26.276 { 00:15:26.276 "code": -32602, 00:15:26.276 "message": "Invalid cntlid range [1-0]" 00:15:26.276 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:26.276 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2537 -I 65520 00:15:26.537 [2024-11-26 07:25:10.456529] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2537: invalid cntlid range [1-65520] 00:15:26.537 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:26.537 { 00:15:26.537 "nqn": "nqn.2016-06.io.spdk:cnode2537", 00:15:26.537 "max_cntlid": 65520, 00:15:26.537 "method": "nvmf_create_subsystem", 00:15:26.537 "req_id": 1 00:15:26.537 } 00:15:26.537 Got JSON-RPC error response 00:15:26.537 response: 00:15:26.537 { 00:15:26.537 "code": -32602, 00:15:26.537 "message": "Invalid cntlid range [1-65520]" 00:15:26.537 }' 00:15:26.537 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 
-- # [[ request: 00:15:26.537 { 00:15:26.537 "nqn": "nqn.2016-06.io.spdk:cnode2537", 00:15:26.537 "max_cntlid": 65520, 00:15:26.537 "method": "nvmf_create_subsystem", 00:15:26.537 "req_id": 1 00:15:26.537 } 00:15:26.537 Got JSON-RPC error response 00:15:26.537 response: 00:15:26.537 { 00:15:26.537 "code": -32602, 00:15:26.537 "message": "Invalid cntlid range [1-65520]" 00:15:26.537 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:26.537 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20207 -i 6 -I 5 00:15:26.537 [2024-11-26 07:25:10.645130] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20207: invalid cntlid range [6-5] 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:26.799 { 00:15:26.799 "nqn": "nqn.2016-06.io.spdk:cnode20207", 00:15:26.799 "min_cntlid": 6, 00:15:26.799 "max_cntlid": 5, 00:15:26.799 "method": "nvmf_create_subsystem", 00:15:26.799 "req_id": 1 00:15:26.799 } 00:15:26.799 Got JSON-RPC error response 00:15:26.799 response: 00:15:26.799 { 00:15:26.799 "code": -32602, 00:15:26.799 "message": "Invalid cntlid range [6-5]" 00:15:26.799 }' 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:26.799 { 00:15:26.799 "nqn": "nqn.2016-06.io.spdk:cnode20207", 00:15:26.799 "min_cntlid": 6, 00:15:26.799 "max_cntlid": 5, 00:15:26.799 "method": "nvmf_create_subsystem", 00:15:26.799 "req_id": 1 00:15:26.799 } 00:15:26.799 Got JSON-RPC error response 00:15:26.799 response: 00:15:26.799 { 00:15:26.799 "code": -32602, 00:15:26.799 "message": "Invalid cntlid range [6-5]" 00:15:26.799 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:26.799 { 00:15:26.799 "name": "foobar", 00:15:26.799 "method": "nvmf_delete_target", 00:15:26.799 "req_id": 1 00:15:26.799 } 00:15:26.799 Got JSON-RPC error response 00:15:26.799 response: 00:15:26.799 { 00:15:26.799 "code": -32602, 00:15:26.799 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:26.799 }' 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:26.799 { 00:15:26.799 "name": "foobar", 00:15:26.799 "method": "nvmf_delete_target", 00:15:26.799 "req_id": 1 00:15:26.799 } 00:15:26.799 Got JSON-RPC error response 00:15:26.799 response: 00:15:26.799 { 00:15:26.799 "code": -32602, 00:15:26.799 "message": "The specified target doesn't exist, cannot delete it." 00:15:26.799 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:26.799 rmmod nvme_tcp 00:15:26.799 
rmmod nvme_fabrics 00:15:26.799 rmmod nvme_keyring 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2032627 ']' 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2032627 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2032627 ']' 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2032627 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2032627 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2032627' 00:15:26.799 killing process with pid 2032627 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2032627 00:15:26.799 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2032627 00:15:27.061 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:27.061 07:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:27.061 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:27.061 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:15:27.061 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:15:27.061 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:27.061 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:15:27.061 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:27.061 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:27.061 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.061 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.061 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:29.608 00:15:29.608 real 0m14.813s 00:15:29.608 user 0m20.824s 00:15:29.608 sys 0m7.227s 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:29.608 ************************************ 00:15:29.608 END TEST nvmf_invalid 00:15:29.608 ************************************ 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:29.608 ************************************ 00:15:29.608 START TEST nvmf_connect_stress 00:15:29.608 ************************************ 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:29.608 * Looking for test storage... 00:15:29.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 
00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:29.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.608 --rc genhtml_branch_coverage=1 00:15:29.608 --rc genhtml_function_coverage=1 00:15:29.608 --rc genhtml_legend=1 00:15:29.608 --rc 
geninfo_all_blocks=1 00:15:29.608 --rc geninfo_unexecuted_blocks=1 00:15:29.608 00:15:29.608 ' 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:29.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.608 --rc genhtml_branch_coverage=1 00:15:29.608 --rc genhtml_function_coverage=1 00:15:29.608 --rc genhtml_legend=1 00:15:29.608 --rc geninfo_all_blocks=1 00:15:29.608 --rc geninfo_unexecuted_blocks=1 00:15:29.608 00:15:29.608 ' 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:29.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.608 --rc genhtml_branch_coverage=1 00:15:29.608 --rc genhtml_function_coverage=1 00:15:29.608 --rc genhtml_legend=1 00:15:29.608 --rc geninfo_all_blocks=1 00:15:29.608 --rc geninfo_unexecuted_blocks=1 00:15:29.608 00:15:29.608 ' 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:29.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.608 --rc genhtml_branch_coverage=1 00:15:29.608 --rc genhtml_function_coverage=1 00:15:29.608 --rc genhtml_legend=1 00:15:29.608 --rc geninfo_all_blocks=1 00:15:29.608 --rc geninfo_unexecuted_blocks=1 00:15:29.608 00:15:29.608 ' 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.608 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:15:29.609 
07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:29.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:15:29.609 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:15:37.756 07:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:37.756 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:37.756 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.756 07:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.756 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:37.756 Found net devices under 0000:31:00.0: cvl_0_0 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:37.757 Found net devices under 0000:31:00.1: cvl_0_1 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:37.757 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:38.019 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:38.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:38.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:15:38.019 00:15:38.019 --- 10.0.0.2 ping statistics --- 00:15:38.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.019 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:15:38.019 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:38.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:38.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:15:38.019 00:15:38.019 --- 10.0.0.1 ping statistics --- 00:15:38.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.019 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:15:38.019 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.019 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:15:38.019 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:38.019 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.019 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:38.019 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:38.019 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.019 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:38.019 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:38.019 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:38.019 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:38.020 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:38.020 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.020 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2038471 00:15:38.020 07:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2038471 00:15:38.020 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:38.020 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2038471 ']' 00:15:38.020 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.020 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:38.020 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.020 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:38.020 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.020 [2024-11-26 07:25:22.017426] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:15:38.020 [2024-11-26 07:25:22.017476] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.020 [2024-11-26 07:25:22.121685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:38.281 [2024-11-26 07:25:22.161455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:38.281 [2024-11-26 07:25:22.161494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.281 [2024-11-26 07:25:22.161502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.281 [2024-11-26 07:25:22.161509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.281 [2024-11-26 07:25:22.161515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.281 [2024-11-26 07:25:22.163022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.281 [2024-11-26 07:25:22.163179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.281 [2024-11-26 07:25:22.163179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.852 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.852 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:15:38.852 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:38.852 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:38.852 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.852 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.852 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:38.852 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.852 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:15:38.852 [2024-11-26 07:25:22.860251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.852 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.852 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:38.852 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.852 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.852 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.852 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.853 [2024-11-26 07:25:22.884687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.853 NULL1 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2038580 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.853 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.114 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.114 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.114 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.114 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.114 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.114 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.114 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.114 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.114 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.114 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.114 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:39.114 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.114 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.114 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.376 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.376 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:39.376 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.376 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.376 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.636 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.636 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:39.636 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.636 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.636 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.897 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.897 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:39.897 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.897 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.897 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.468 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.468 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:40.468 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.468 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.468 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.729 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.729 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:40.729 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.729 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.729 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.990 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.990 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:40.990 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.990 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.990 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.251 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.251 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:41.251 07:25:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.251 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.251 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.512 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.512 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:41.512 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.512 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.512 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.083 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.083 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:42.083 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.083 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.083 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.399 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.399 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:42.399 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.399 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.399 
07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.660 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.660 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:42.660 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.661 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.661 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.922 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.922 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:42.922 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.922 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.922 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.184 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.184 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:43.184 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.184 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.184 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.445 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.445 
07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:43.445 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.445 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.445 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.018 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.018 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:44.018 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.018 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.018 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.279 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.279 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:44.279 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.279 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.279 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.539 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.539 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:44.539 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:15:44.539 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.539 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.800 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.800 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:44.800 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.800 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.800 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.060 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.060 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:45.060 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.060 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.060 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.628 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.628 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:45.628 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.628 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.628 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:15:45.890 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.890 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:45.890 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.890 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.890 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.150 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.150 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:46.150 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.150 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.150 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.412 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.412 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:46.412 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.412 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.412 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.984 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.984 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2038580 00:15:46.984 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.984 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.984 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.245 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.245 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:47.245 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.245 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.245 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.506 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.506 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:47.506 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.506 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.506 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.767 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.767 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:47.767 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.767 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:47.767 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.028 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.029 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:48.029 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.029 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.029 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.599 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.599 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:48.599 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.599 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.599 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.860 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.860 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:48.860 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.860 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.860 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.122 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:15:49.122 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.122 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2038580 00:15:49.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2038580) - No such process 00:15:49.122 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2038580 00:15:49.122 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:49.122 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:49.122 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:49.122 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:49.122 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:15:49.122 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:49.122 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:15:49.122 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:49.122 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:49.122 rmmod nvme_tcp 00:15:49.122 rmmod nvme_fabrics 00:15:49.122 rmmod nvme_keyring 00:15:49.122 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:49.122 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:15:49.123 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:15:49.123 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2038471 ']' 00:15:49.123 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2038471 00:15:49.123 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2038471 ']' 00:15:49.123 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2038471 00:15:49.123 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:15:49.123 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.123 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2038471 00:15:49.123 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:49.123 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:49.123 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2038471' 00:15:49.123 killing process with pid 2038471 00:15:49.123 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2038471 00:15:49.123 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2038471 00:15:49.383 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:49.383 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:49.383 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:49.383 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:15:49.383 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:15:49.383 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:49.383 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:15:49.383 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:49.383 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:49.384 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.384 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:49.384 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.296 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:51.296 00:15:51.296 real 0m22.224s 00:15:51.296 user 0m42.381s 00:15:51.296 sys 0m9.985s 00:15:51.297 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.297 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.297 ************************************ 00:15:51.297 END TEST nvmf_connect_stress 00:15:51.297 ************************************ 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:51.556 ************************************ 00:15:51.556 START TEST nvmf_fused_ordering 00:15:51.556 ************************************ 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:51.556 * Looking for test storage... 00:15:51.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:51.556 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:51.557 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:51.557 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:51.557 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:51.557 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:51.557 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.557 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:51.557 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:51.557 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:51.557 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:51.557 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.902 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:51.902 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:51.902 07:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:51.902 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:51.902 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:51.902 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.902 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:51.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.902 --rc genhtml_branch_coverage=1 00:15:51.902 --rc genhtml_function_coverage=1 00:15:51.902 --rc genhtml_legend=1 00:15:51.902 --rc geninfo_all_blocks=1 00:15:51.902 --rc geninfo_unexecuted_blocks=1 00:15:51.902 00:15:51.902 ' 00:15:51.902 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:51.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.902 --rc genhtml_branch_coverage=1 00:15:51.902 --rc genhtml_function_coverage=1 00:15:51.902 --rc genhtml_legend=1 00:15:51.903 --rc geninfo_all_blocks=1 00:15:51.903 --rc geninfo_unexecuted_blocks=1 00:15:51.903 00:15:51.903 ' 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:51.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.903 --rc genhtml_branch_coverage=1 00:15:51.903 --rc genhtml_function_coverage=1 00:15:51.903 --rc genhtml_legend=1 00:15:51.903 --rc geninfo_all_blocks=1 00:15:51.903 --rc geninfo_unexecuted_blocks=1 00:15:51.903 00:15:51.903 ' 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:51.903 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:51.903 --rc genhtml_branch_coverage=1 00:15:51.903 --rc genhtml_function_coverage=1 00:15:51.903 --rc genhtml_legend=1 00:15:51.903 --rc geninfo_all_blocks=1 00:15:51.903 --rc geninfo_unexecuted_blocks=1 00:15:51.903 00:15:51.903 ' 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.903 07:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:51.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:15:51.903 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:00.113 07:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:00.113 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:00.113 07:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:00.113 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.113 07:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:00.113 Found net devices under 0000:31:00.0: cvl_0_0 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:00.113 Found net devices under 0000:31:00.1: cvl_0_1 
00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:00.113 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:00.113 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:00.113 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:00.113 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:00.113 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:00.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:00.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:16:00.114 00:16:00.114 --- 10.0.0.2 ping statistics --- 00:16:00.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.114 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:00.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:00.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:16:00.114 00:16:00.114 --- 10.0.0.1 ping statistics --- 00:16:00.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.114 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:00.114 07:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2045439 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2045439 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2045439 ']' 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.114 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:00.114 [2024-11-26 07:25:44.236225] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:16:00.114 [2024-11-26 07:25:44.236292] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.376 [2024-11-26 07:25:44.347929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.376 [2024-11-26 07:25:44.396097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.376 [2024-11-26 07:25:44.396142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.376 [2024-11-26 07:25:44.396151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.376 [2024-11-26 07:25:44.396158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.376 [2024-11-26 07:25:44.396164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:00.376 [2024-11-26 07:25:44.396967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:00.948 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:00.948 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0
00:16:00.948 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:00.948 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:00.948 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:01.209 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:01.209 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:01.209 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.209 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:01.209 [2024-11-26 07:25:45.098255] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:01.209 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:01.210 [2024-11-26 07:25:45.122544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:01.210 NULL1
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.210 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:16:01.210 [2024-11-26 07:25:45.191309] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization...
00:16:01.210 [2024-11-26 07:25:45.191371] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2045597 ]
00:16:01.781 Attached to nqn.2016-06.io.spdk:cnode1
00:16:01.781 Namespace ID: 1 size: 1GB
00:16:01.781 fused_ordering(0) through fused_ordering(1023) [1024 sequential counter lines condensed; timestamps advance 00:16:01.781, 00:16:02.042, 00:16:02.304, 00:16:02.878, 00:16:03.450]
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:03.450 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2045439 ']'
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2045439
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2045439 ']'
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2045439
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2045439
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:16:03.450 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:16:03.451 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2045439'
killing process with pid 2045439
00:16:03.451 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2045439
00:16:03.451 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2045439
00:16:03.713 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:16:03.713 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp ==
\t\c\p ]] 00:16:03.713 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:03.713 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:16:03.713 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:16:03.713 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:03.713 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:16:03.713 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:03.713 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:03.713 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.713 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.713 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.625 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:05.625 00:16:05.625 real 0m14.247s 00:16:05.625 user 0m7.263s 00:16:05.625 sys 0m7.671s 00:16:05.625 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.625 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:05.625 ************************************ 00:16:05.625 END TEST nvmf_fused_ordering 00:16:05.625 ************************************ 00:16:05.887 07:25:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:05.887 07:25:49 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:05.887 07:25:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.887 07:25:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:05.887 ************************************ 00:16:05.887 START TEST nvmf_ns_masking 00:16:05.887 ************************************ 00:16:05.887 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:05.887 * Looking for test storage... 00:16:05.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:05.887 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:05.887 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:05.887 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:16:05.887 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:05.887 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:05.887 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:05.887 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:05.887 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:16:05.887 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:16:05.887 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:16:05.887 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:16:05.887 07:25:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:16:05.887 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:16:05.887 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:16:05.887 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:05.888 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:16:05.888 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:16:05.888 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:05.888 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:05.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.888 --rc genhtml_branch_coverage=1 00:16:05.888 --rc genhtml_function_coverage=1 00:16:05.888 --rc genhtml_legend=1 00:16:05.888 --rc geninfo_all_blocks=1 00:16:05.888 --rc geninfo_unexecuted_blocks=1 00:16:05.888 00:16:05.888 ' 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:05.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.888 --rc genhtml_branch_coverage=1 00:16:05.888 --rc genhtml_function_coverage=1 00:16:05.888 --rc genhtml_legend=1 00:16:05.888 --rc geninfo_all_blocks=1 00:16:05.888 --rc geninfo_unexecuted_blocks=1 00:16:05.888 00:16:05.888 ' 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:05.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.888 --rc genhtml_branch_coverage=1 00:16:05.888 --rc genhtml_function_coverage=1 00:16:05.888 --rc genhtml_legend=1 00:16:05.888 --rc geninfo_all_blocks=1 00:16:05.888 --rc geninfo_unexecuted_blocks=1 00:16:05.888 00:16:05.888 ' 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:05.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.888 --rc genhtml_branch_coverage=1 00:16:05.888 --rc 
genhtml_function_coverage=1 00:16:05.888 --rc genhtml_legend=1 00:16:05.888 --rc geninfo_all_blocks=1 00:16:05.888 --rc geninfo_unexecuted_blocks=1 00:16:05.888 00:16:05.888 ' 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.888 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.148 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.148 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:06.148 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:06.148 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.148 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.148 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:06.148 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.148 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:06.148 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:16:06.148 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.148 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.148 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.148 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.148 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.148 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.148 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:06.148 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:06.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=412fdb48-b5de-49e5-81e0-df59831f1ae2 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=31dc85fe-76ab-4b11-b3c0-3edf69e529f4 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a0fefd9a-c05e-4d5c-9ce4-a5322f8c26fa 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:16:06.149 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:14.319 07:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.319 07:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:14.319 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:14.319 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:16:14.319 Found net devices under 0000:31:00.0: cvl_0_0 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:14.319 Found net devices under 0000:31:00.1: cvl_0_1 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:14.319 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:14.320 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:14.320 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:14.320 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:14.320 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:14.320 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.320 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:14.320 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:14.320 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:14.320 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:14.320 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:14.320 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:14.320 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:14.320 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:14.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:16:14.581 00:16:14.581 --- 10.0.0.2 ping statistics --- 00:16:14.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.581 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:14.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:14.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:16:14.581 00:16:14.581 --- 10.0.0.1 ping statistics --- 00:16:14.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.581 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2050942 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2050942 
00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2050942 ']' 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:14.581 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:14.581 [2024-11-26 07:25:58.628076] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:16:14.581 [2024-11-26 07:25:58.628141] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.842 [2024-11-26 07:25:58.718077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.842 [2024-11-26 07:25:58.757936] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.842 [2024-11-26 07:25:58.757974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:14.842 [2024-11-26 07:25:58.757984] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.842 [2024-11-26 07:25:58.757992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.842 [2024-11-26 07:25:58.757998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.842 [2024-11-26 07:25:58.758604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.414 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:15.414 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:16:15.414 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:15.414 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:15.414 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:15.414 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.414 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:15.675 [2024-11-26 07:25:59.604239] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.675 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:15.675 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:15.675 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:16:15.675 Malloc1 00:16:15.675 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:15.935 Malloc2 00:16:15.935 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:16.195 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:16.456 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:16.456 [2024-11-26 07:26:00.516777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.456 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:16.456 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a0fefd9a-c05e-4d5c-9ce4-a5322f8c26fa -a 10.0.0.2 -s 4420 -i 4 00:16:16.716 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:16.716 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:16.716 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:16.716 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:16.716 07:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:18.630 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:18.630 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:18.630 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:18.891 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:18.891 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:18.891 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:18.891 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:18.891 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:18.891 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:18.891 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:18.891 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:18.891 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:18.891 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:18.891 [ 0]:0x1 00:16:18.891 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:18.891 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:18.891 
07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ecfa2f68fdc546e2917d4d68d973bf30 00:16:18.891 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ecfa2f68fdc546e2917d4d68d973bf30 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.891 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:19.153 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:19.153 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:19.153 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:19.153 [ 0]:0x1 00:16:19.153 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:19.153 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:19.153 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ecfa2f68fdc546e2917d4d68d973bf30 00:16:19.153 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ecfa2f68fdc546e2917d4d68d973bf30 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:19.153 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:19.153 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:19.153 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:19.153 [ 1]:0x2 00:16:19.153 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:19.153 07:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:19.153 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e770f5b40b244b0b93739c58d198ebce 00:16:19.153 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e770f5b40b244b0b93739c58d198ebce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:19.153 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:19.153 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:19.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.153 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:19.414 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:19.674 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:19.674 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a0fefd9a-c05e-4d5c-9ce4-a5322f8c26fa -a 10.0.0.2 -s 4420 -i 4 00:16:19.674 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:19.674 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:19.674 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:19.674 07:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:16:19.674 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:16:19.674 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:22.228 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:16:22.229 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:22.229 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:22.229 [ 0]:0x2 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e770f5b40b244b0b93739c58d198ebce 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e770f5b40b244b0b93739c58d198ebce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:22.229 [ 0]:0x1 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ecfa2f68fdc546e2917d4d68d973bf30 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ecfa2f68fdc546e2917d4d68d973bf30 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:22.229 [ 1]:0x2 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e770f5b40b244b0b93739c58d198ebce 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e770f5b40b244b0b93739c58d198ebce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:22.229 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:22.489 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:22.749 [ 0]:0x2 00:16:22.749 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:22.749 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:22.749 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e770f5b40b244b0b93739c58d198ebce 00:16:22.749 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e770f5b40b244b0b93739c58d198ebce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:22.749 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:22.749 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:22.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.749 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:23.009 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:23.009 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a0fefd9a-c05e-4d5c-9ce4-a5322f8c26fa -a 10.0.0.2 -s 4420 -i 4 00:16:23.269 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:23.269 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:23.269 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:23.269 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:23.269 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:23.269 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:25.184 [ 0]:0x1 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:25.184 07:26:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ecfa2f68fdc546e2917d4d68d973bf30 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ecfa2f68fdc546e2917d4d68d973bf30 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:25.184 [ 1]:0x2 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:25.184 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:25.444 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e770f5b40b244b0b93739c58d198ebce 00:16:25.444 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e770f5b40b244b0b93739c58d198ebce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:25.444 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:25.444 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:25.444 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:25.444 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:25.444 
07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:25.444 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.444 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:25.444 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.444 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:25.444 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:25.444 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:25.444 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:25.444 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:25.705 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:25.706 [ 0]:0x2 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e770f5b40b244b0b93739c58d198ebce 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e770f5b40b244b0b93739c58d198ebce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:25.706 07:26:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:25.706 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:25.967 [2024-11-26 07:26:09.855271] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:25.967 request: 00:16:25.967 { 00:16:25.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:25.967 "nsid": 2, 00:16:25.967 "host": "nqn.2016-06.io.spdk:host1", 00:16:25.967 "method": "nvmf_ns_remove_host", 00:16:25.967 "req_id": 1 00:16:25.967 } 00:16:25.967 Got JSON-RPC error response 00:16:25.967 response: 00:16:25.967 { 00:16:25.967 "code": -32602, 00:16:25.967 "message": "Invalid parameters" 00:16:25.967 } 00:16:25.967 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:25.967 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:25.967 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:25.967 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:25.967 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:25.967 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:25.968 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:25.968 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:25.968 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.968 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:25.968 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.968 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:25.968 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:25.968 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:25.968 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:25.968 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:25.968 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:25.968 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:25.968 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:25.968 07:26:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:25.968 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:25.968 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:25.968 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:25.968 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:25.968 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:25.968 [ 0]:0x2 00:16:25.968 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:25.968 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:25.968 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e770f5b40b244b0b93739c58d198ebce 00:16:25.968 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e770f5b40b244b0b93739c58d198ebce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:25.968 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:25.968 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:26.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.229 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2053203 00:16:26.229 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.229 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:26.229 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2053203 /var/tmp/host.sock 00:16:26.229 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2053203 ']' 00:16:26.229 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:26.229 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.229 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:26.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:26.229 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.229 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:26.229 [2024-11-26 07:26:10.259662] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
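The `NOT` wrapper that appears throughout this trace (autotest_common.sh@652–679) inverts a command's exit status so that an expected failure — like the rejected `nvmf_ns_remove_host` on a masked namespace — passes the test. A minimal sketch of the pattern; the real helper additionally validates the argument with `valid_exec_arg` and treats exit codes above 128 as crashes rather than clean failures:

```shell
# Minimal NOT: run the command and succeed only when it fails.
# (The real SPDK helper also checks the arg is executable and
# distinguishes es > 128, which indicates death by signal.)
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}
```

Usage mirrors the trace: `NOT ns_is_visible 0x1` succeeds precisely because the visibility check fails.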
00:16:26.229 [2024-11-26 07:26:10.259714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053203 ] 00:16:26.229 [2024-11-26 07:26:10.353468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.489 [2024-11-26 07:26:10.389423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.061 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.061 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:16:27.061 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.322 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:27.322 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 412fdb48-b5de-49e5-81e0-df59831f1ae2 00:16:27.322 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:27.322 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 412FDB48B5DE49E581E0DF59831F1AE2 -i 00:16:27.582 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 31dc85fe-76ab-4b11-b3c0-3edf69e529f4 00:16:27.582 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:27.582 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 31DC85FE76AB4B11B3C03EDF69E529F4 -i 00:16:27.843 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:27.843 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:28.103 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:28.103 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:28.363 nvme0n1 00:16:28.363 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:28.363 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:28.624 nvme1n2 00:16:28.624 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:28.624 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:28.624 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:28.624 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:28.624 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:28.884 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:28.884 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:28.884 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:28.884 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:29.145 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 412fdb48-b5de-49e5-81e0-df59831f1ae2 == \4\1\2\f\d\b\4\8\-\b\5\d\e\-\4\9\e\5\-\8\1\e\0\-\d\f\5\9\8\3\1\f\1\a\e\2 ]] 00:16:29.145 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:29.145 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:29.145 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:29.145 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 31dc85fe-76ab-4b11-b3c0-3edf69e529f4 == \3\1\d\c\8\5\f\e\-\7\6\a\b\-\4\b\1\1\-\b\3\c\0\-\3\e\d\f\6\9\e\5\2\9\f\4 ]] 00:16:29.145 07:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:29.406 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 412fdb48-b5de-49e5-81e0-df59831f1ae2 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 412FDB48B5DE49E581E0DF59831F1AE2 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 412FDB48B5DE49E581E0DF59831F1AE2 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 412FDB48B5DE49E581E0DF59831F1AE2 00:16:29.667 [2024-11-26 07:26:13.770025] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:16:29.667 [2024-11-26 07:26:13.770059] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:16:29.667 [2024-11-26 07:26:13.770068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.667 request: 00:16:29.667 { 00:16:29.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.667 "namespace": { 00:16:29.667 "bdev_name": "invalid", 00:16:29.667 "nsid": 1, 00:16:29.667 "nguid": "412FDB48B5DE49E581E0DF59831F1AE2", 00:16:29.667 "no_auto_visible": false 00:16:29.667 }, 00:16:29.667 "method": "nvmf_subsystem_add_ns", 00:16:29.667 "req_id": 1 00:16:29.667 } 00:16:29.667 Got JSON-RPC error response 00:16:29.667 response: 00:16:29.667 { 00:16:29.667 "code": -32602, 00:16:29.667 "message": "Invalid parameters" 00:16:29.667 } 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 412fdb48-b5de-49e5-81e0-df59831f1ae2 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:29.667 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 412FDB48B5DE49E581E0DF59831F1AE2 -i 00:16:29.928 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:16:31.839 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:16:31.839 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:16:31.839 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:32.100 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:16:32.100 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2053203 00:16:32.100 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2053203 ']' 00:16:32.100 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2053203 00:16:32.100 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:16:32.100 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.100 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2053203 00:16:32.100 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:32.100 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:32.100 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2053203' 00:16:32.100 killing process with pid 2053203 00:16:32.100 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2053203 00:16:32.100 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2053203 00:16:32.362 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:32.623 rmmod nvme_tcp 00:16:32.623 rmmod 
nvme_fabrics 00:16:32.623 rmmod nvme_keyring 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2050942 ']' 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2050942 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2050942 ']' 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2050942 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2050942 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2050942' 00:16:32.623 killing process with pid 2050942 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2050942 00:16:32.623 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2050942 00:16:32.883 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:32.884 
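Both teardown paths above (`killprocess 2053203`, `killprocess 2050942`) follow the same check-then-kill-then-reap shape. A hedged sketch of that pattern — SPDK's actual helper also verifies via `ps --no-headers -o comm=` that the target is not `sudo` before killing, which this simplified version omits:

```shell
# Sketch of the killprocess pattern from autotest_common.sh:
# confirm the PID is alive, terminate it, then reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1       # kill -0: probe without signaling
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true              # reap if it is our child; ignore
}                                    # the SIGTERM exit status
```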
07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:32.884 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:32.884 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:16:32.884 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:16:32.884 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:32.884 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:16:32.884 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:32.884 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:32.884 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.884 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:32.884 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.808 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:34.808 00:16:34.808 real 0m29.093s 00:16:34.808 user 0m31.822s 00:16:34.808 sys 0m8.852s 00:16:34.808 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.808 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:34.808 ************************************ 00:16:34.808 END TEST nvmf_ns_masking 00:16:34.808 ************************************ 00:16:35.070 07:26:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:35.070 07:26:18 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:35.070 07:26:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:35.070 07:26:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.070 07:26:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:35.070 ************************************ 00:16:35.070 START TEST nvmf_nvme_cli 00:16:35.070 ************************************ 00:16:35.070 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:35.070 * Looking for test storage... 00:16:35.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:35.070 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:35.070 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:16:35.070 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:35.070 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:35.070 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:16:35.071 07:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:35.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.071 --rc genhtml_branch_coverage=1 00:16:35.071 --rc genhtml_function_coverage=1 00:16:35.071 --rc genhtml_legend=1 00:16:35.071 --rc geninfo_all_blocks=1 00:16:35.071 --rc geninfo_unexecuted_blocks=1 00:16:35.071 
00:16:35.071 ' 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:35.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.071 --rc genhtml_branch_coverage=1 00:16:35.071 --rc genhtml_function_coverage=1 00:16:35.071 --rc genhtml_legend=1 00:16:35.071 --rc geninfo_all_blocks=1 00:16:35.071 --rc geninfo_unexecuted_blocks=1 00:16:35.071 00:16:35.071 ' 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:35.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.071 --rc genhtml_branch_coverage=1 00:16:35.071 --rc genhtml_function_coverage=1 00:16:35.071 --rc genhtml_legend=1 00:16:35.071 --rc geninfo_all_blocks=1 00:16:35.071 --rc geninfo_unexecuted_blocks=1 00:16:35.071 00:16:35.071 ' 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:35.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.071 --rc genhtml_branch_coverage=1 00:16:35.071 --rc genhtml_function_coverage=1 00:16:35.071 --rc genhtml_legend=1 00:16:35.071 --rc geninfo_all_blocks=1 00:16:35.071 --rc geninfo_unexecuted_blocks=1 00:16:35.071 00:16:35.071 ' 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:35.071 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
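The scripts/common.sh trace above steps through a component-wise version comparison: each version string is split on `.-:` into an array, then the fields are walked in parallel. A standalone sketch of that logic (the function name `ver_ge` and the greater-or-equal framing are ours, not SPDK's exact helper):

```shell
# Compare two dotted version strings field by field, as the trace above
# does: split on ".-:", pad missing fields with 0, return success (0)
# when $1 >= $2. This is a re-creation for illustration, not SPDK's code.
ver_ge() {
    local IFS='.-:'
    local -a ver1=($1) ver2=($2)
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 0   # strictly newer at this field
        (( a < b )) && return 1   # strictly older at this field
    done
    return 0   # all fields equal
}
```

Note the per-field numeric comparison: `1.10` is newer than `1.2`, which a plain string compare would get wrong.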
00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.332 07:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:35.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:35.332 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.333 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.333 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:16:35.333 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:35.333 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:35.333 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:16:35.333 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:43.479 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:43.479 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:16:43.479 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:43.479 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:43.479 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:43.479 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:43.479 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:43.479 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:16:43.479 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:43.479 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:16:43.479 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:16:43.479 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:16:43.479 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:16:43.479 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:16:43.480 07:26:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:43.480 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:43.480 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.480 07:26:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:43.480 Found net devices under 0000:31:00.0: cvl_0_0 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:43.480 Found net devices under 0000:31:00.1: cvl_0_1 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:43.480 07:26:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:43.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:16:43.480 00:16:43.480 --- 10.0.0.2 ping statistics --- 00:16:43.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.480 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:43.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:43.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:16:43.480 00:16:43.480 --- 10.0.0.1 ping statistics --- 00:16:43.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.480 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:43.480 07:26:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:43.741 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:43.741 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:43.741 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:43.741 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:43.741 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2059217 00:16:43.741 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2059217 00:16:43.741 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:43.741 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2059217 ']' 00:16:43.741 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.741 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.741 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.741 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.741 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:43.741 [2024-11-26 07:26:27.686612] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
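The nvmf_tcp_init sequence traced above builds a two-ended test topology on one host: the target-side interface (cvl_0_0) is moved into a private network namespace, each side gets a /24 address on 10.0.0.0/24, and an iptables rule opens port 4420 on the initiator side. Collected as one sketch — interface, namespace, and address values come from the log, while the `$run` indirection (default: `echo`) is ours so the privileged commands are only printed:

```shell
# Recreate the namespace topology from the log. With the default runner
# (echo) this is a dry run; pass an empty-prefix runner such as "sudo"
# (or eval) on a real machine with the named interfaces present.
setup_tcp_test_net() {
    local target_if=$1 initiator_if=$2 ns=$3 run=${4:-echo}
    $run ip netns add "$ns"                                    # target namespace
    $run ip link set "$target_if" netns "$ns"                  # move target NIC in
    $run ip addr add 10.0.0.1/24 dev "$initiator_if"           # initiator IP
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # target IP
    $run ip link set "$initiator_if" up
    $run ip netns exec "$ns" ip link set "$target_if" up
    $run ip netns exec "$ns" ip link set lo up
    $run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}
```

The log then verifies reachability with a single ping in each direction (host to 10.0.0.2, namespace to 10.0.0.1) before starting the target.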
00:16:43.741 [2024-11-26 07:26:27.686677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.741 [2024-11-26 07:26:27.786513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:43.741 [2024-11-26 07:26:27.828491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.741 [2024-11-26 07:26:27.828530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.741 [2024-11-26 07:26:27.828538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.741 [2024-11-26 07:26:27.828545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.741 [2024-11-26 07:26:27.828552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
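`waitforlisten` above blocks until the freshly launched nvmf_tgt exposes its RPC socket at /var/tmp/spdk.sock. A minimal stand-in for that wait (function name, polling interval, and retry budget are all assumptions — the real helper also checks that the pid is still alive):

```shell
# Poll until a UNIX domain socket appears at $sock, up to $tries attempts
# ~0.1s apart. Returns 0 on success, 1 on timeout. Illustrative only.
wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk.sock} tries=${2:-100}
    while (( tries-- > 0 )); do
        [[ -S $sock ]] && return 0   # socket file exists
        sleep 0.1
    done
    return 1
}
```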
00:16:43.741 [2024-11-26 07:26:27.830184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.741 [2024-11-26 07:26:27.830303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.741 [2024-11-26 07:26:27.830460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.741 [2024-11-26 07:26:27.830461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:44.683 [2024-11-26 07:26:28.545112] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
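The `rpc_cmd` calls in this test configure the target step by step: create the TCP transport, create two 64 MiB malloc bdevs, create the subsystem, attach both bdevs as namespaces, and add data plus discovery listeners. Collected into one sequence for clarity — arguments, NQN, and serial match the log, while the `$rpc` indirection (default: `echo rpc.py`) is ours, making this a dry run rather than SPDK's actual rpc_cmd wrapper:

```shell
# Replay the target-configuration RPC sequence from the log. Pass the real
# RPC client (e.g. "scripts/rpc.py") as $1 to execute against a live target.
configure_target() {
    local rpc=${1:-echo rpc.py}
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0     # 64 MiB, 512 B blocks
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
        -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
}
```

With the discovery listener in place, the subsequent `nvme discover` reports two log entries: the discovery subsystem itself and cnode1.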
00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:44.683 Malloc0 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:44.683 Malloc1 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:44.683 [2024-11-26 07:26:28.646562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.683 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:16:44.945 00:16:44.945 Discovery Log Number of Records 2, Generation counter 2 00:16:44.945 =====Discovery Log Entry 0====== 00:16:44.945 trtype: tcp 00:16:44.945 adrfam: ipv4 00:16:44.945 subtype: current discovery subsystem 00:16:44.945 treq: not required 00:16:44.945 portid: 0 00:16:44.945 trsvcid: 4420 
00:16:44.945 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:44.945 traddr: 10.0.0.2 00:16:44.945 eflags: explicit discovery connections, duplicate discovery information 00:16:44.945 sectype: none 00:16:44.945 =====Discovery Log Entry 1====== 00:16:44.945 trtype: tcp 00:16:44.945 adrfam: ipv4 00:16:44.945 subtype: nvme subsystem 00:16:44.945 treq: not required 00:16:44.945 portid: 0 00:16:44.945 trsvcid: 4420 00:16:44.945 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:44.945 traddr: 10.0.0.2 00:16:44.945 eflags: none 00:16:44.945 sectype: none 00:16:44.945 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:44.945 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:44.945 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:44.945 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:44.945 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:44.945 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:44.945 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:44.945 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:44.946 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:44.946 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:44.946 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:46.328 07:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:46.328 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:16:46.328 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:46.328 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:46.328 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:46.328 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:16:48.240 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:48.240 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:48.240 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:48.500 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:48.501 
07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:48.501 /dev/nvme0n2 ]] 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:48.501 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:49.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:49.073 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:49.073 rmmod nvme_tcp 00:16:49.073 rmmod nvme_fabrics 00:16:49.073 rmmod nvme_keyring 00:16:49.073 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:49.073 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:16:49.073 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:16:49.073 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2059217 ']' 
00:16:49.073 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2059217 00:16:49.073 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2059217 ']' 00:16:49.073 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2059217 00:16:49.073 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:16:49.073 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.073 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2059217 00:16:49.073 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:49.073 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:49.073 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2059217' 00:16:49.073 killing process with pid 2059217 00:16:49.073 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2059217 00:16:49.073 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2059217 00:16:49.334 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:49.334 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:49.334 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:49.334 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:16:49.334 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:16:49.334 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:16:49.334 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:16:49.335 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:49.335 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:49.335 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.335 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.335 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.334 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:51.334 00:16:51.334 real 0m16.313s 00:16:51.334 user 0m23.992s 00:16:51.334 sys 0m7.046s 00:16:51.334 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.334 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:51.334 ************************************ 00:16:51.334 END TEST nvmf_nvme_cli 00:16:51.334 ************************************ 00:16:51.334 07:26:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:51.334 07:26:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:51.334 07:26:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:51.334 07:26:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.334 07:26:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:51.334 ************************************ 00:16:51.334 
START TEST nvmf_vfio_user 00:16:51.334 ************************************ 00:16:51.334 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:51.643 * Looking for test storage... 00:16:51.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.643 07:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.643 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:16:51.644 07:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:51.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.644 --rc genhtml_branch_coverage=1 00:16:51.644 --rc genhtml_function_coverage=1 00:16:51.644 --rc genhtml_legend=1 00:16:51.644 --rc geninfo_all_blocks=1 00:16:51.644 --rc geninfo_unexecuted_blocks=1 00:16:51.644 00:16:51.644 ' 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:51.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.644 --rc genhtml_branch_coverage=1 00:16:51.644 --rc genhtml_function_coverage=1 00:16:51.644 --rc genhtml_legend=1 00:16:51.644 --rc geninfo_all_blocks=1 00:16:51.644 --rc geninfo_unexecuted_blocks=1 00:16:51.644 00:16:51.644 ' 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:51.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.644 --rc genhtml_branch_coverage=1 00:16:51.644 --rc genhtml_function_coverage=1 00:16:51.644 --rc genhtml_legend=1 00:16:51.644 --rc geninfo_all_blocks=1 00:16:51.644 --rc geninfo_unexecuted_blocks=1 00:16:51.644 00:16:51.644 ' 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:51.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.644 --rc genhtml_branch_coverage=1 00:16:51.644 --rc genhtml_function_coverage=1 00:16:51.644 --rc genhtml_legend=1 00:16:51.644 --rc geninfo_all_blocks=1 00:16:51.644 --rc geninfo_unexecuted_blocks=1 00:16:51.644 00:16:51.644 ' 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.644 
07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:51.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:51.644 07:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2061036 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2061036' 00:16:51.644 Process pid: 2061036 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2061036 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 2061036 ']' 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.644 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:51.644 [2024-11-26 07:26:35.695958] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:16:51.644 [2024-11-26 07:26:35.696036] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.905 [2024-11-26 07:26:35.781120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:51.905 [2024-11-26 07:26:35.822423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.905 [2024-11-26 07:26:35.822459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.905 [2024-11-26 07:26:35.822467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.905 [2024-11-26 07:26:35.822474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.905 [2024-11-26 07:26:35.822479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:51.905 [2024-11-26 07:26:35.824085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.905 [2024-11-26 07:26:35.824208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.905 [2024-11-26 07:26:35.824364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.905 [2024-11-26 07:26:35.824364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:52.477 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.477 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:52.477 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:53.422 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:53.684 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:53.684 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:53.684 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:53.684 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:53.684 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:53.945 Malloc1 00:16:53.945 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:54.205 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:54.206 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:54.467 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:54.467 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:54.467 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:54.728 Malloc2 00:16:54.728 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:54.728 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:54.989 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:55.253 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:55.253 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:55.253 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:16:55.253 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:55.253 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:55.253 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:55.253 [2024-11-26 07:26:39.258920] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:16:55.253 [2024-11-26 07:26:39.258990] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2061739 ] 00:16:55.253 [2024-11-26 07:26:39.314986] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:55.253 [2024-11-26 07:26:39.317330] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:55.253 [2024-11-26 07:26:39.317353] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f63551bc000 00:16:55.253 [2024-11-26 07:26:39.318331] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:55.253 [2024-11-26 07:26:39.319333] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:55.253 [2024-11-26 07:26:39.320332] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:55.253 [2024-11-26 07:26:39.321345] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:55.253 [2024-11-26 07:26:39.322351] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:55.253 [2024-11-26 07:26:39.323354] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:55.253 [2024-11-26 07:26:39.324363] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:55.253 [2024-11-26 07:26:39.325363] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:55.253 [2024-11-26 07:26:39.326375] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:55.253 [2024-11-26 07:26:39.326388] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f63551b1000 00:16:55.253 [2024-11-26 07:26:39.327713] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:55.253 [2024-11-26 07:26:39.347021] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:55.253 [2024-11-26 07:26:39.347049] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:16:55.253 [2024-11-26 07:26:39.352515] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:16:55.253 [2024-11-26 07:26:39.352560] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:55.253 [2024-11-26 07:26:39.352640] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:16:55.253 [2024-11-26 07:26:39.352656] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:16:55.253 [2024-11-26 07:26:39.352661] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:16:55.253 [2024-11-26 07:26:39.353518] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:55.253 [2024-11-26 07:26:39.353528] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:16:55.253 [2024-11-26 07:26:39.353535] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:16:55.253 [2024-11-26 07:26:39.354523] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:55.253 [2024-11-26 07:26:39.354532] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:16:55.253 [2024-11-26 07:26:39.354539] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:55.253 [2024-11-26 07:26:39.355523] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:55.253 [2024-11-26 07:26:39.355532] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:55.253 [2024-11-26 07:26:39.356531] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:55.253 [2024-11-26 07:26:39.356538] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:55.253 [2024-11-26 07:26:39.356543] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:55.253 [2024-11-26 07:26:39.356550] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:55.253 [2024-11-26 07:26:39.356659] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:16:55.253 [2024-11-26 07:26:39.356664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:55.253 [2024-11-26 07:26:39.356669] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:55.253 [2024-11-26 07:26:39.357534] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:55.253 [2024-11-26 07:26:39.358550] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:55.253 [2024-11-26 07:26:39.359553] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:16:55.253 [2024-11-26 07:26:39.360552] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:55.253 [2024-11-26 07:26:39.360604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:55.253 [2024-11-26 07:26:39.361562] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:55.253 [2024-11-26 07:26:39.361570] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:55.253 [2024-11-26 07:26:39.361575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:55.253 [2024-11-26 07:26:39.361597] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:16:55.253 [2024-11-26 07:26:39.361609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:55.253 [2024-11-26 07:26:39.361623] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:55.253 [2024-11-26 07:26:39.361628] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:55.253 [2024-11-26 07:26:39.361632] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:55.253 [2024-11-26 07:26:39.361644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:55.253 [2024-11-26 07:26:39.361680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:55.253 [2024-11-26 07:26:39.361690] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:16:55.254 [2024-11-26 07:26:39.361695] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:16:55.254 [2024-11-26 07:26:39.361699] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:16:55.254 [2024-11-26 07:26:39.361704] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:55.254 [2024-11-26 07:26:39.361713] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:16:55.254 [2024-11-26 07:26:39.361718] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:16:55.254 [2024-11-26 07:26:39.361723] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.361732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.361743] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:55.254 [2024-11-26 07:26:39.361753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:55.254 [2024-11-26 07:26:39.361763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.254 [2024-11-26 
07:26:39.361774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.254 [2024-11-26 07:26:39.361783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.254 [2024-11-26 07:26:39.361792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.254 [2024-11-26 07:26:39.361797] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.361804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.361813] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:55.254 [2024-11-26 07:26:39.361825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:55.254 [2024-11-26 07:26:39.361832] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:16:55.254 [2024-11-26 07:26:39.361838] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.361845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.361851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.361859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:55.254 [2024-11-26 07:26:39.361870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:55.254 [2024-11-26 07:26:39.361933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.361941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.361948] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:55.254 [2024-11-26 07:26:39.361953] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:55.254 [2024-11-26 07:26:39.361956] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:55.254 [2024-11-26 07:26:39.361963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:55.254 [2024-11-26 07:26:39.361972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:55.254 [2024-11-26 07:26:39.361982] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:16:55.254 [2024-11-26 07:26:39.361990] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.361998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.362005] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:55.254 [2024-11-26 07:26:39.362009] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:55.254 [2024-11-26 07:26:39.362013] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:55.254 [2024-11-26 07:26:39.362021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:55.254 [2024-11-26 07:26:39.362035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:55.254 [2024-11-26 07:26:39.362047] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.362056] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.362063] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:55.254 [2024-11-26 07:26:39.362067] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:55.254 [2024-11-26 07:26:39.362071] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:55.254 [2024-11-26 07:26:39.362077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:55.254 [2024-11-26 07:26:39.362086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:55.254 [2024-11-26 07:26:39.362095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.362102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.362109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.362115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.362120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.362125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.362130] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:55.254 [2024-11-26 07:26:39.362135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:16:55.254 [2024-11-26 07:26:39.362140] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:16:55.254 [2024-11-26 07:26:39.362157] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:55.254 [2024-11-26 07:26:39.362167] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:55.254 [2024-11-26 07:26:39.362179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:55.254 [2024-11-26 07:26:39.362189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:55.254 [2024-11-26 07:26:39.362201] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:55.254 [2024-11-26 07:26:39.362208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:55.254 [2024-11-26 07:26:39.362219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:55.254 [2024-11-26 07:26:39.362229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:55.254 [2024-11-26 07:26:39.362243] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:55.254 [2024-11-26 07:26:39.362248] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:55.254 [2024-11-26 07:26:39.362251] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:55.254 [2024-11-26 07:26:39.362255] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:55.254 [2024-11-26 07:26:39.362258] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:55.254 [2024-11-26 07:26:39.362265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:16:55.254 [2024-11-26 07:26:39.362273] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:55.254 [2024-11-26 07:26:39.362277] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:55.254 [2024-11-26 07:26:39.362280] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:55.254 [2024-11-26 07:26:39.362286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:55.254 [2024-11-26 07:26:39.362294] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:55.254 [2024-11-26 07:26:39.362298] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:55.254 [2024-11-26 07:26:39.362302] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:55.254 [2024-11-26 07:26:39.362308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:55.254 [2024-11-26 07:26:39.362316] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:55.254 [2024-11-26 07:26:39.362321] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:55.254 [2024-11-26 07:26:39.362324] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:55.254 [2024-11-26 07:26:39.362330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:55.254 [2024-11-26 07:26:39.362337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:16:55.254 [2024-11-26 07:26:39.362350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:55.254 [2024-11-26 07:26:39.362362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:55.254 [2024-11-26 07:26:39.362369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:55.254 ===================================================== 00:16:55.254 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:55.254 ===================================================== 00:16:55.254 Controller Capabilities/Features 00:16:55.254 ================================ 00:16:55.254 Vendor ID: 4e58 00:16:55.254 Subsystem Vendor ID: 4e58 00:16:55.255 Serial Number: SPDK1 00:16:55.255 Model Number: SPDK bdev Controller 00:16:55.255 Firmware Version: 25.01 00:16:55.255 Recommended Arb Burst: 6 00:16:55.255 IEEE OUI Identifier: 8d 6b 50 00:16:55.255 Multi-path I/O 00:16:55.255 May have multiple subsystem ports: Yes 00:16:55.255 May have multiple controllers: Yes 00:16:55.255 Associated with SR-IOV VF: No 00:16:55.255 Max Data Transfer Size: 131072 00:16:55.255 Max Number of Namespaces: 32 00:16:55.255 Max Number of I/O Queues: 127 00:16:55.255 NVMe Specification Version (VS): 1.3 00:16:55.255 NVMe Specification Version (Identify): 1.3 00:16:55.255 Maximum Queue Entries: 256 00:16:55.255 Contiguous Queues Required: Yes 00:16:55.255 Arbitration Mechanisms Supported 00:16:55.255 Weighted Round Robin: Not Supported 00:16:55.255 Vendor Specific: Not Supported 00:16:55.255 Reset Timeout: 15000 ms 00:16:55.255 Doorbell Stride: 4 bytes 00:16:55.255 NVM Subsystem Reset: Not Supported 00:16:55.255 Command Sets Supported 00:16:55.255 NVM Command Set: Supported 00:16:55.255 Boot Partition: Not Supported 00:16:55.255 Memory 
Page Size Minimum: 4096 bytes 00:16:55.255 Memory Page Size Maximum: 4096 bytes 00:16:55.255 Persistent Memory Region: Not Supported 00:16:55.255 Optional Asynchronous Events Supported 00:16:55.255 Namespace Attribute Notices: Supported 00:16:55.255 Firmware Activation Notices: Not Supported 00:16:55.255 ANA Change Notices: Not Supported 00:16:55.255 PLE Aggregate Log Change Notices: Not Supported 00:16:55.255 LBA Status Info Alert Notices: Not Supported 00:16:55.255 EGE Aggregate Log Change Notices: Not Supported 00:16:55.255 Normal NVM Subsystem Shutdown event: Not Supported 00:16:55.255 Zone Descriptor Change Notices: Not Supported 00:16:55.255 Discovery Log Change Notices: Not Supported 00:16:55.255 Controller Attributes 00:16:55.255 128-bit Host Identifier: Supported 00:16:55.255 Non-Operational Permissive Mode: Not Supported 00:16:55.255 NVM Sets: Not Supported 00:16:55.255 Read Recovery Levels: Not Supported 00:16:55.255 Endurance Groups: Not Supported 00:16:55.255 Predictable Latency Mode: Not Supported 00:16:55.255 Traffic Based Keep ALive: Not Supported 00:16:55.255 Namespace Granularity: Not Supported 00:16:55.255 SQ Associations: Not Supported 00:16:55.255 UUID List: Not Supported 00:16:55.255 Multi-Domain Subsystem: Not Supported 00:16:55.255 Fixed Capacity Management: Not Supported 00:16:55.255 Variable Capacity Management: Not Supported 00:16:55.255 Delete Endurance Group: Not Supported 00:16:55.255 Delete NVM Set: Not Supported 00:16:55.255 Extended LBA Formats Supported: Not Supported 00:16:55.255 Flexible Data Placement Supported: Not Supported 00:16:55.255 00:16:55.255 Controller Memory Buffer Support 00:16:55.255 ================================ 00:16:55.255 Supported: No 00:16:55.255 00:16:55.255 Persistent Memory Region Support 00:16:55.255 ================================ 00:16:55.255 Supported: No 00:16:55.255 00:16:55.255 Admin Command Set Attributes 00:16:55.255 ============================ 00:16:55.255 Security Send/Receive: Not Supported 
00:16:55.255 Format NVM: Not Supported 00:16:55.255 Firmware Activate/Download: Not Supported 00:16:55.255 Namespace Management: Not Supported 00:16:55.255 Device Self-Test: Not Supported 00:16:55.255 Directives: Not Supported 00:16:55.255 NVMe-MI: Not Supported 00:16:55.255 Virtualization Management: Not Supported 00:16:55.255 Doorbell Buffer Config: Not Supported 00:16:55.255 Get LBA Status Capability: Not Supported 00:16:55.255 Command & Feature Lockdown Capability: Not Supported 00:16:55.255 Abort Command Limit: 4 00:16:55.255 Async Event Request Limit: 4 00:16:55.255 Number of Firmware Slots: N/A 00:16:55.255 Firmware Slot 1 Read-Only: N/A 00:16:55.255 Firmware Activation Without Reset: N/A 00:16:55.255 Multiple Update Detection Support: N/A 00:16:55.255 Firmware Update Granularity: No Information Provided 00:16:55.255 Per-Namespace SMART Log: No 00:16:55.255 Asymmetric Namespace Access Log Page: Not Supported 00:16:55.256 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:55.256 Command Effects Log Page: Supported 00:16:55.256 Get Log Page Extended Data: Supported 00:16:55.256 Telemetry Log Pages: Not Supported 00:16:55.256 Persistent Event Log Pages: Not Supported 00:16:55.256 Supported Log Pages Log Page: May Support 00:16:55.256 Commands Supported & Effects Log Page: Not Supported 00:16:55.256 Feature Identifiers & Effects Log Page:May Support 00:16:55.256 NVMe-MI Commands & Effects Log Page: May Support 00:16:55.256 Data Area 4 for Telemetry Log: Not Supported 00:16:55.256 Error Log Page Entries Supported: 128 00:16:55.256 Keep Alive: Supported 00:16:55.256 Keep Alive Granularity: 10000 ms 00:16:55.256 00:16:55.256 NVM Command Set Attributes 00:16:55.256 ========================== 00:16:55.256 Submission Queue Entry Size 00:16:55.256 Max: 64 00:16:55.256 Min: 64 00:16:55.256 Completion Queue Entry Size 00:16:55.256 Max: 16 00:16:55.256 Min: 16 00:16:55.256 Number of Namespaces: 32 00:16:55.256 Compare Command: Supported 00:16:55.256 Write Uncorrectable 
Command: Not Supported 00:16:55.256 Dataset Management Command: Supported 00:16:55.256 Write Zeroes Command: Supported 00:16:55.256 Set Features Save Field: Not Supported 00:16:55.256 Reservations: Not Supported 00:16:55.256 Timestamp: Not Supported 00:16:55.256 Copy: Supported 00:16:55.256 Volatile Write Cache: Present 00:16:55.256 Atomic Write Unit (Normal): 1 00:16:55.256 Atomic Write Unit (PFail): 1 00:16:55.256 Atomic Compare & Write Unit: 1 00:16:55.256 Fused Compare & Write: Supported 00:16:55.256 Scatter-Gather List 00:16:55.256 SGL Command Set: Supported (Dword aligned) 00:16:55.256 SGL Keyed: Not Supported 00:16:55.256 SGL Bit Bucket Descriptor: Not Supported 00:16:55.256 SGL Metadata Pointer: Not Supported 00:16:55.256 Oversized SGL: Not Supported 00:16:55.256 SGL Metadata Address: Not Supported 00:16:55.256 SGL Offset: Not Supported 00:16:55.256 Transport SGL Data Block: Not Supported 00:16:55.256 Replay Protected Memory Block: Not Supported 00:16:55.256 00:16:55.256 Firmware Slot Information 00:16:55.256 ========================= 00:16:55.256 Active slot: 1 00:16:55.256 Slot 1 Firmware Revision: 25.01 00:16:55.257 00:16:55.257 00:16:55.257 Commands Supported and Effects 00:16:55.257 ============================== 00:16:55.257 Admin Commands 00:16:55.257 -------------- 00:16:55.257 Get Log Page (02h): Supported 00:16:55.257 Identify (06h): Supported 00:16:55.257 Abort (08h): Supported 00:16:55.257 Set Features (09h): Supported 00:16:55.257 Get Features (0Ah): Supported 00:16:55.257 Asynchronous Event Request (0Ch): Supported 00:16:55.257 Keep Alive (18h): Supported 00:16:55.257 I/O Commands 00:16:55.257 ------------ 00:16:55.257 Flush (00h): Supported LBA-Change 00:16:55.257 Write (01h): Supported LBA-Change 00:16:55.257 Read (02h): Supported 00:16:55.257 Compare (05h): Supported 00:16:55.257 Write Zeroes (08h): Supported LBA-Change 00:16:55.257 Dataset Management (09h): Supported LBA-Change 00:16:55.257 Copy (19h): Supported LBA-Change 00:16:55.257 
00:16:55.257 Error Log 00:16:55.257 ========= 00:16:55.257 00:16:55.257 Arbitration 00:16:55.257 =========== 00:16:55.257 Arbitration Burst: 1 00:16:55.257 00:16:55.257 Power Management 00:16:55.257 ================ 00:16:55.257 Number of Power States: 1 00:16:55.257 Current Power State: Power State #0 00:16:55.257 Power State #0: 00:16:55.257 Max Power: 0.00 W 00:16:55.257 Non-Operational State: Operational 00:16:55.257 Entry Latency: Not Reported 00:16:55.257 Exit Latency: Not Reported 00:16:55.257 Relative Read Throughput: 0 00:16:55.257 Relative Read Latency: 0 00:16:55.257 Relative Write Throughput: 0 00:16:55.257 Relative Write Latency: 0 00:16:55.257 Idle Power: Not Reported 00:16:55.257 Active Power: Not Reported 00:16:55.257 Non-Operational Permissive Mode: Not Supported 00:16:55.257 00:16:55.257 Health Information 00:16:55.257 ================== 00:16:55.257 Critical Warnings: 00:16:55.257 Available Spare Space: OK 00:16:55.257 Temperature: OK 00:16:55.257 Device Reliability: OK 00:16:55.257 Read Only: No 00:16:55.257 Volatile Memory Backup: OK 00:16:55.257 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:55.257 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:55.257 Available Spare: 0% 00:16:55.257 Available Spare Threshold: 0% 00:16:55.525 Life Percentage Used: 0% 
[2024-11-26 07:26:39.362470] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:55.257 [2024-11-26 07:26:39.362479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:55.257 [2024-11-26 07:26:39.362506] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:16:55.257 [2024-11-26 07:26:39.362516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.257 [2024-11-26 07:26:39.362523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.257 [2024-11-26 07:26:39.362531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.258 [2024-11-26 07:26:39.362538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.258 [2024-11-26 07:26:39.362568] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:55.258 [2024-11-26 07:26:39.362578] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:55.258 [2024-11-26 07:26:39.363568] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:55.258 [2024-11-26 07:26:39.363610] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:16:55.258 [2024-11-26 07:26:39.363617] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:16:55.258 [2024-11-26 07:26:39.364574] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:55.258 [2024-11-26 07:26:39.364585] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:16:55.258 [2024-11-26 07:26:39.364643] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:55.258 [2024-11-26 07:26:39.366604] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 
00:16:55.525 Data Units Read: 0 00:16:55.525 Data Units Written: 0 00:16:55.525 Host Read Commands: 0 00:16:55.525 Host Write Commands: 0 00:16:55.525 Controller Busy Time: 0 minutes 00:16:55.525 Power Cycles: 0 00:16:55.525 Power On Hours: 0 hours 00:16:55.525 Unsafe Shutdowns: 0 00:16:55.525 Unrecoverable Media Errors: 0 00:16:55.525 Lifetime Error Log Entries: 0 00:16:55.525 Warning Temperature Time: 0 minutes 00:16:55.525 Critical Temperature Time: 0 minutes 00:16:55.525 00:16:55.525 Number of Queues 00:16:55.525 ================ 00:16:55.525 Number of I/O Submission Queues: 127 00:16:55.525 Number of I/O Completion Queues: 127 00:16:55.525 00:16:55.525 Active Namespaces 00:16:55.525 ================= 00:16:55.525 Namespace ID:1 00:16:55.525 Error Recovery Timeout: Unlimited 00:16:55.525 Command Set Identifier: NVM (00h) 00:16:55.525 Deallocate: Supported 00:16:55.525 Deallocated/Unwritten Error: Not Supported 00:16:55.525 Deallocated Read Value: Unknown 00:16:55.525 Deallocate in Write Zeroes: Not Supported 00:16:55.525 Deallocated Guard Field: 0xFFFF 00:16:55.525 Flush: Supported 00:16:55.525 Reservation: Supported 00:16:55.525 Namespace Sharing Capabilities: Multiple Controllers 00:16:55.525 Size (in LBAs): 131072 (0GiB) 00:16:55.525 Capacity (in LBAs): 131072 (0GiB) 00:16:55.525 Utilization (in LBAs): 131072 (0GiB) 00:16:55.525 NGUID: FE2BE4B329FB4E6A846D649B9AA7C780 00:16:55.525 UUID: fe2be4b3-29fb-4e6a-846d-649b9aa7c780 00:16:55.525 Thin Provisioning: Not Supported 00:16:55.525 Per-NS Atomic Units: Yes 00:16:55.525 Atomic Boundary Size (Normal): 0 00:16:55.525 Atomic Boundary Size (PFail): 0 00:16:55.525 Atomic Boundary Offset: 0 00:16:55.525 Maximum Single Source Range Length: 65535 00:16:55.525 Maximum Copy Length: 65535 00:16:55.525 Maximum Source Range Count: 1 00:16:55.525 NGUID/EUI64 Never Reused: No 00:16:55.525 Namespace Write Protected: No 00:16:55.525 Number of LBA Formats: 1 00:16:55.525 Current LBA Format: LBA Format #00 00:16:55.525 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:16:55.525 00:16:55.525 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:55.526 [2024-11-26 07:26:39.572574] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:00.821 Initializing NVMe Controllers 00:17:00.821 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:00.821 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:00.821 Initialization complete. Launching workers. 00:17:00.821 ======================================================== 00:17:00.821 Latency(us) 00:17:00.821 Device Information : IOPS MiB/s Average min max 00:17:00.821 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39979.85 156.17 3201.83 845.00 10769.30 00:17:00.821 ======================================================== 00:17:00.821 Total : 39979.85 156.17 3201.83 845.00 10769.30 00:17:00.821 00:17:00.821 [2024-11-26 07:26:44.593189] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:00.821 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:00.821 [2024-11-26 07:26:44.782045] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:06.113 Initializing NVMe Controllers 00:17:06.113 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:06.113 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:06.113 Initialization complete. Launching workers. 00:17:06.113 ======================================================== 00:17:06.113 Latency(us) 00:17:06.113 Device Information : IOPS MiB/s Average min max 00:17:06.113 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16052.69 62.71 7979.28 6982.33 7986.00 00:17:06.113 ======================================================== 00:17:06.114 Total : 16052.69 62.71 7979.28 6982.33 7986.00 00:17:06.114 00:17:06.114 [2024-11-26 07:26:49.823555] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:06.114 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:06.114 [2024-11-26 07:26:50.046529] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:11.407 [2024-11-26 07:26:55.115024] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:11.407 Initializing NVMe Controllers 00:17:11.407 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:11.407 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:11.407 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:11.407 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:11.407 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:11.407 Initialization complete. 
Launching workers. 00:17:11.407 Starting thread on core 2 00:17:11.407 Starting thread on core 3 00:17:11.407 Starting thread on core 1 00:17:11.407 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:11.407 [2024-11-26 07:26:55.408224] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:14.710 [2024-11-26 07:26:58.476954] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:14.710 Initializing NVMe Controllers 00:17:14.710 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:14.710 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:14.710 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:14.710 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:14.710 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:14.710 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:14.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:14.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:14.710 Initialization complete. Launching workers. 
00:17:14.710 Starting thread on core 1 with urgent priority queue 00:17:14.710 Starting thread on core 2 with urgent priority queue 00:17:14.710 Starting thread on core 3 with urgent priority queue 00:17:14.710 Starting thread on core 0 with urgent priority queue 00:17:14.710 SPDK bdev Controller (SPDK1 ) core 0: 11642.33 IO/s 8.59 secs/100000 ios 00:17:14.710 SPDK bdev Controller (SPDK1 ) core 1: 8073.33 IO/s 12.39 secs/100000 ios 00:17:14.710 SPDK bdev Controller (SPDK1 ) core 2: 8880.67 IO/s 11.26 secs/100000 ios 00:17:14.710 SPDK bdev Controller (SPDK1 ) core 3: 7279.00 IO/s 13.74 secs/100000 ios 00:17:14.710 ======================================================== 00:17:14.710 00:17:14.710 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:14.710 [2024-11-26 07:26:58.769375] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:14.710 Initializing NVMe Controllers 00:17:14.710 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:14.710 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:14.710 Namespace ID: 1 size: 0GB 00:17:14.710 Initialization complete. 00:17:14.710 INFO: using host memory buffer for IO 00:17:14.710 Hello world! 
00:17:14.710 [2024-11-26 07:26:58.806553] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:14.971 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:14.971 [2024-11-26 07:26:59.096137] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:16.358 Initializing NVMe Controllers 00:17:16.358 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:16.358 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:16.358 Initialization complete. Launching workers. 00:17:16.358 submit (in ns) avg, min, max = 7426.7, 3905.0, 4092630.0 00:17:16.358 complete (in ns) avg, min, max = 19875.5, 2394.2, 3998422.5 00:17:16.358 00:17:16.358 Submit histogram 00:17:16.358 ================ 00:17:16.358 Range in us Cumulative Count 00:17:16.358 3.893 - 3.920: 0.3886% ( 74) 00:17:16.358 3.920 - 3.947: 3.9595% ( 680) 00:17:16.358 3.947 - 3.973: 11.2062% ( 1380) 00:17:16.358 3.973 - 4.000: 21.3989% ( 1941) 00:17:16.358 4.000 - 4.027: 33.4979% ( 2304) 00:17:16.358 4.027 - 4.053: 46.4528% ( 2467) 00:17:16.358 4.053 - 4.080: 62.7842% ( 3110) 00:17:16.358 4.080 - 4.107: 77.4090% ( 2785) 00:17:16.358 4.107 - 4.133: 88.5890% ( 2129) 00:17:16.358 4.133 - 4.160: 94.9535% ( 1212) 00:17:16.358 4.160 - 4.187: 97.7262% ( 528) 00:17:16.358 4.187 - 4.213: 98.8447% ( 213) 00:17:16.358 4.213 - 4.240: 99.3541% ( 97) 00:17:16.358 4.240 - 4.267: 99.4906% ( 26) 00:17:16.358 4.267 - 4.293: 99.5326% ( 8) 00:17:16.358 4.293 - 4.320: 99.5379% ( 1) 00:17:16.358 4.373 - 4.400: 99.5431% ( 1) 00:17:16.358 4.453 - 4.480: 99.5536% ( 2) 00:17:16.358 4.560 - 4.587: 99.5641% ( 2) 00:17:16.358 4.800 - 4.827: 99.5746% ( 2) 00:17:16.358 5.467 - 5.493: 99.5799% ( 1) 
00:17:16.358 5.493 - 5.520: 99.5851% ( 1) 00:17:16.358 5.680 - 5.707: 99.5904% ( 1) 00:17:16.358 5.787 - 5.813: 99.5957% ( 1) 00:17:16.358 5.813 - 5.840: 99.6062% ( 2) 00:17:16.358 5.840 - 5.867: 99.6114% ( 1) 00:17:16.358 5.867 - 5.893: 99.6167% ( 1) 00:17:16.358 5.920 - 5.947: 99.6219% ( 1) 00:17:16.358 5.947 - 5.973: 99.6324% ( 2) 00:17:16.358 5.973 - 6.000: 99.6429% ( 2) 00:17:16.358 6.027 - 6.053: 99.6482% ( 1) 00:17:16.358 6.053 - 6.080: 99.6534% ( 1) 00:17:16.358 6.080 - 6.107: 99.6692% ( 3) 00:17:16.358 6.107 - 6.133: 99.6744% ( 1) 00:17:16.358 6.133 - 6.160: 99.6849% ( 2) 00:17:16.358 6.160 - 6.187: 99.6902% ( 1) 00:17:16.359 6.213 - 6.240: 99.6954% ( 1) 00:17:16.359 6.240 - 6.267: 99.7007% ( 1) 00:17:16.359 6.267 - 6.293: 99.7059% ( 1) 00:17:16.359 6.347 - 6.373: 99.7164% ( 2) 00:17:16.359 6.427 - 6.453: 99.7217% ( 1) 00:17:16.359 6.453 - 6.480: 99.7269% ( 1) 00:17:16.359 6.480 - 6.507: 99.7322% ( 1) 00:17:16.359 6.507 - 6.533: 99.7427% ( 2) 00:17:16.359 6.533 - 6.560: 99.7584% ( 3) 00:17:16.359 6.613 - 6.640: 99.7637% ( 1) 00:17:16.359 6.693 - 6.720: 99.7689% ( 1) 00:17:16.359 6.773 - 6.800: 99.7742% ( 1) 00:17:16.359 6.880 - 6.933: 99.7794% ( 1) 00:17:16.359 6.933 - 6.987: 99.7899% ( 2) 00:17:16.359 6.987 - 7.040: 99.7952% ( 1) 00:17:16.359 7.093 - 7.147: 99.8057% ( 2) 00:17:16.359 7.200 - 7.253: 99.8162% ( 2) 00:17:16.359 7.253 - 7.307: 99.8267% ( 2) 00:17:16.359 7.307 - 7.360: 99.8320% ( 1) 00:17:16.359 7.467 - 7.520: 99.8477% ( 3) 00:17:16.359 7.627 - 7.680: 99.8530% ( 1) 00:17:16.359 7.787 - 7.840: 99.8582% ( 1) 00:17:16.359 7.840 - 7.893: 99.8635% ( 1) 00:17:16.359 7.893 - 7.947: 99.8687% ( 1) 00:17:16.359 7.947 - 8.000: 99.8845% ( 3) 00:17:16.359 8.213 - 8.267: 99.8950% ( 2) 00:17:16.359 8.587 - 8.640: 99.9002% ( 1) 00:17:16.359 9.173 - 9.227: 99.9055% ( 1) 00:17:16.359 9.493 - 9.547: 99.9107% ( 1) 00:17:16.359 16.533 - 16.640: 99.9160% ( 1) 00:17:16.359 3986.773 - 4014.080: 99.9947% ( 15) 00:17:16.359 4068.693 - 4096.000: 100.0000% ( 1) 
00:17:16.359 00:17:16.359 Complete histogram 00:17:16.359 ================== 00:17:16.359 Range in us Cumulative Count 00:17:16.359 2.387 - 2.400: 0.1050% ( 20) 00:17:16.359 2.400 - 2.413: 0.8770% ( 147) 00:17:16.359 2.413 - 2.427: 1.1133% ( 45) 00:17:16.359 2.427 - 2.440: 1.3391% ( 43) 00:17:16.359 2.440 - 2.453: 13.2280% ( 2264) 00:17:16.359 2.453 - 2.467: 51.2157% ( 7234) 00:17:16.359 2.467 - 2.480: 61.2141% ( 1904) 00:17:16.359 2.480 - 2.493: 72.9244% ( 2230) 00:17:16.359 2.493 - 2.507: 78.9949% ( 1156) 00:17:16.359 2.507 - 2.520: 81.7518% ( 525) 00:17:16.359 [2024-11-26 07:27:00.117877] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:16.359 2.520 - 2.533: 86.8508% ( 971) 00:17:16.359 2.533 - 2.547: 92.6062% ( 1096) 00:17:16.359 2.547 - 2.560: 95.6992% ( 589) 00:17:16.359 2.560 - 2.573: 97.6317% ( 368) 00:17:16.359 2.573 - 2.587: 98.7765% ( 218) 00:17:16.359 2.587 - 2.600: 99.2018% ( 81) 00:17:16.359 2.600 - 2.613: 99.2911% ( 17) 00:17:16.359 2.613 - 2.627: 99.3068% ( 3) 00:17:16.359 2.627 - 2.640: 99.3121% ( 1) 00:17:16.359 2.680 - 2.693: 99.3173% ( 1) 00:17:16.359 2.933 - 2.947: 99.3226% ( 1) 00:17:16.359 4.267 - 4.293: 99.3278% ( 1) 00:17:16.359 4.347 - 4.373: 99.3383% ( 2) 00:17:16.359 4.373 - 4.400: 99.3488% ( 2) 00:17:16.359 4.560 - 4.587: 99.3541% ( 1) 00:17:16.359 4.587 - 4.613: 99.3593% ( 1) 00:17:16.359 4.693 - 4.720: 99.3698% ( 2) 00:17:16.359 4.747 - 4.773: 99.3751% ( 1) 00:17:16.359 4.800 - 4.827: 99.3803% ( 1) 00:17:16.359 4.960 - 4.987: 99.3856% ( 1) 00:17:16.359 4.987 - 5.013: 99.3909% ( 1) 00:17:16.359 5.013 - 5.040: 99.3961% ( 1) 00:17:16.359 5.120 - 5.147: 99.4014% ( 1) 00:17:16.359 5.173 - 5.200: 99.4066% ( 1) 00:17:16.359 5.200 - 5.227: 99.4119% ( 1) 00:17:16.359 5.227 - 5.253: 99.4171% ( 1) 00:17:16.359 5.333 - 5.360: 99.4224% ( 1) 00:17:16.359 5.360 - 5.387: 99.4276% ( 1) 00:17:16.359 5.387 - 5.413: 99.4329% ( 1) 00:17:16.359 5.413 - 5.440: 99.4381% ( 1) 00:17:16.359 5.440 - 
5.467: 99.4434% ( 1) 00:17:16.359 5.467 - 5.493: 99.4539% ( 2) 00:17:16.359 5.520 - 5.547: 99.4591% ( 1) 00:17:16.359 5.547 - 5.573: 99.4644% ( 1) 00:17:16.359 5.627 - 5.653: 99.4696% ( 1) 00:17:16.359 5.680 - 5.707: 99.4749% ( 1) 00:17:16.359 5.707 - 5.733: 99.4801% ( 1) 00:17:16.359 5.733 - 5.760: 99.4854% ( 1) 00:17:16.359 5.760 - 5.787: 99.4906% ( 1) 00:17:16.359 5.920 - 5.947: 99.4959% ( 1) 00:17:16.359 5.947 - 5.973: 99.5011% ( 1) 00:17:16.359 5.973 - 6.000: 99.5064% ( 1) 00:17:16.359 6.000 - 6.027: 99.5116% ( 1) 00:17:16.359 6.107 - 6.133: 99.5169% ( 1) 00:17:16.359 6.240 - 6.267: 99.5221% ( 1) 00:17:16.359 6.293 - 6.320: 99.5274% ( 1) 00:17:16.359 6.693 - 6.720: 99.5326% ( 1) 00:17:16.359 6.747 - 6.773: 99.5379% ( 1) 00:17:16.359 7.040 - 7.093: 99.5431% ( 1) 00:17:16.359 8.533 - 8.587: 99.5484% ( 1) 00:17:16.359 10.293 - 10.347: 99.5536% ( 1) 00:17:16.359 13.333 - 13.387: 99.5589% ( 1) 00:17:16.359 13.760 - 13.867: 99.5641% ( 1) 00:17:16.359 3522.560 - 3549.867: 99.5694% ( 1) 00:17:16.359 3986.773 - 4014.080: 100.0000% ( 82) 00:17:16.359 00:17:16.359 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:16.359 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:16.359 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:16.359 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:16.359 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:16.359 [ 00:17:16.359 { 00:17:16.359 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:16.359 "subtype": "Discovery", 00:17:16.359 
"listen_addresses": [], 00:17:16.359 "allow_any_host": true, 00:17:16.359 "hosts": [] 00:17:16.359 }, 00:17:16.359 { 00:17:16.359 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:16.359 "subtype": "NVMe", 00:17:16.359 "listen_addresses": [ 00:17:16.359 { 00:17:16.359 "trtype": "VFIOUSER", 00:17:16.359 "adrfam": "IPv4", 00:17:16.359 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:16.359 "trsvcid": "0" 00:17:16.359 } 00:17:16.359 ], 00:17:16.359 "allow_any_host": true, 00:17:16.359 "hosts": [], 00:17:16.359 "serial_number": "SPDK1", 00:17:16.359 "model_number": "SPDK bdev Controller", 00:17:16.359 "max_namespaces": 32, 00:17:16.359 "min_cntlid": 1, 00:17:16.359 "max_cntlid": 65519, 00:17:16.359 "namespaces": [ 00:17:16.360 { 00:17:16.360 "nsid": 1, 00:17:16.360 "bdev_name": "Malloc1", 00:17:16.360 "name": "Malloc1", 00:17:16.360 "nguid": "FE2BE4B329FB4E6A846D649B9AA7C780", 00:17:16.360 "uuid": "fe2be4b3-29fb-4e6a-846d-649b9aa7c780" 00:17:16.360 } 00:17:16.360 ] 00:17:16.360 }, 00:17:16.360 { 00:17:16.360 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:16.360 "subtype": "NVMe", 00:17:16.360 "listen_addresses": [ 00:17:16.360 { 00:17:16.360 "trtype": "VFIOUSER", 00:17:16.360 "adrfam": "IPv4", 00:17:16.360 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:16.360 "trsvcid": "0" 00:17:16.360 } 00:17:16.360 ], 00:17:16.360 "allow_any_host": true, 00:17:16.360 "hosts": [], 00:17:16.360 "serial_number": "SPDK2", 00:17:16.360 "model_number": "SPDK bdev Controller", 00:17:16.360 "max_namespaces": 32, 00:17:16.360 "min_cntlid": 1, 00:17:16.360 "max_cntlid": 65519, 00:17:16.360 "namespaces": [ 00:17:16.360 { 00:17:16.360 "nsid": 1, 00:17:16.360 "bdev_name": "Malloc2", 00:17:16.360 "name": "Malloc2", 00:17:16.360 "nguid": "81D81C6FA5A84DFFB35CD804DFC66FD5", 00:17:16.360 "uuid": "81d81c6f-a5a8-4dff-b35c-d804dfc66fd5" 00:17:16.360 } 00:17:16.360 ] 00:17:16.360 } 00:17:16.360 ] 00:17:16.360 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # 
AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:16.360 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:16.360 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2065783 00:17:16.360 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:16.360 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:17:16.360 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:16.360 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:16.360 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:17:16.360 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:16.360 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:16.621 Malloc3 00:17:16.621 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:16.621 [2024-11-26 07:27:00.549388] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:16.621 [2024-11-26 07:27:00.712497] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:16.621 07:27:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:16.880 Asynchronous Event Request test 00:17:16.880 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:16.880 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:16.880 Registering asynchronous event callbacks... 00:17:16.880 Starting namespace attribute notice tests for all controllers... 00:17:16.880 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:16.880 aer_cb - Changed Namespace 00:17:16.880 Cleaning up... 00:17:16.880 [ 00:17:16.880 { 00:17:16.880 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:16.880 "subtype": "Discovery", 00:17:16.880 "listen_addresses": [], 00:17:16.880 "allow_any_host": true, 00:17:16.880 "hosts": [] 00:17:16.881 }, 00:17:16.881 { 00:17:16.881 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:16.881 "subtype": "NVMe", 00:17:16.881 "listen_addresses": [ 00:17:16.881 { 00:17:16.881 "trtype": "VFIOUSER", 00:17:16.881 "adrfam": "IPv4", 00:17:16.881 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:16.881 "trsvcid": "0" 00:17:16.881 } 00:17:16.881 ], 00:17:16.881 "allow_any_host": true, 00:17:16.881 "hosts": [], 00:17:16.881 "serial_number": "SPDK1", 00:17:16.881 "model_number": "SPDK bdev Controller", 00:17:16.881 "max_namespaces": 32, 00:17:16.881 "min_cntlid": 1, 00:17:16.881 "max_cntlid": 65519, 00:17:16.881 "namespaces": [ 00:17:16.881 { 00:17:16.881 "nsid": 1, 00:17:16.881 "bdev_name": "Malloc1", 00:17:16.881 "name": "Malloc1", 00:17:16.881 "nguid": "FE2BE4B329FB4E6A846D649B9AA7C780", 00:17:16.881 "uuid": "fe2be4b3-29fb-4e6a-846d-649b9aa7c780" 00:17:16.881 }, 00:17:16.881 { 00:17:16.881 "nsid": 2, 00:17:16.881 "bdev_name": "Malloc3", 00:17:16.881 "name": "Malloc3", 00:17:16.881 "nguid": "5EC4FBB572D74DCC8D43D47856629A8E", 00:17:16.881 "uuid": "5ec4fbb5-72d7-4dcc-8d43-d47856629a8e" 
00:17:16.881 } 00:17:16.881 ] 00:17:16.881 }, 00:17:16.881 { 00:17:16.881 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:16.881 "subtype": "NVMe", 00:17:16.881 "listen_addresses": [ 00:17:16.881 { 00:17:16.881 "trtype": "VFIOUSER", 00:17:16.881 "adrfam": "IPv4", 00:17:16.881 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:16.881 "trsvcid": "0" 00:17:16.881 } 00:17:16.881 ], 00:17:16.881 "allow_any_host": true, 00:17:16.881 "hosts": [], 00:17:16.881 "serial_number": "SPDK2", 00:17:16.881 "model_number": "SPDK bdev Controller", 00:17:16.881 "max_namespaces": 32, 00:17:16.881 "min_cntlid": 1, 00:17:16.881 "max_cntlid": 65519, 00:17:16.881 "namespaces": [ 00:17:16.881 { 00:17:16.881 "nsid": 1, 00:17:16.881 "bdev_name": "Malloc2", 00:17:16.881 "name": "Malloc2", 00:17:16.881 "nguid": "81D81C6FA5A84DFFB35CD804DFC66FD5", 00:17:16.881 "uuid": "81d81c6f-a5a8-4dff-b35c-d804dfc66fd5" 00:17:16.881 } 00:17:16.881 ] 00:17:16.881 } 00:17:16.881 ] 00:17:16.881 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2065783 00:17:16.881 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:16.881 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:16.881 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:16.881 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:16.881 [2024-11-26 07:27:00.952144] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:17:16.881 [2024-11-26 07:27:00.952189] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2065928 ] 00:17:16.881 [2024-11-26 07:27:01.005929] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:17.144 [2024-11-26 07:27:01.015084] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:17.144 [2024-11-26 07:27:01.015110] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe504616000 00:17:17.144 [2024-11-26 07:27:01.016082] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:17.144 [2024-11-26 07:27:01.017084] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:17.144 [2024-11-26 07:27:01.018089] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:17.144 [2024-11-26 07:27:01.019093] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:17.144 [2024-11-26 07:27:01.020103] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:17.144 [2024-11-26 07:27:01.021105] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:17.144 [2024-11-26 07:27:01.022108] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:17.144 
[2024-11-26 07:27:01.023117] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:17.144 [2024-11-26 07:27:01.024127] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:17.144 [2024-11-26 07:27:01.024139] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe50460b000 00:17:17.144 [2024-11-26 07:27:01.025465] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:17.144 [2024-11-26 07:27:01.041677] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:17.144 [2024-11-26 07:27:01.041701] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:17:17.144 [2024-11-26 07:27:01.046779] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:17.144 [2024-11-26 07:27:01.046828] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:17.144 [2024-11-26 07:27:01.046914] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:17:17.144 [2024-11-26 07:27:01.046931] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:17:17.144 [2024-11-26 07:27:01.046937] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:17:17.144 [2024-11-26 07:27:01.047785] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:17.144 [2024-11-26 07:27:01.047795] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:17:17.144 [2024-11-26 07:27:01.047803] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:17:17.144 [2024-11-26 07:27:01.048786] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:17.144 [2024-11-26 07:27:01.048797] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:17:17.144 [2024-11-26 07:27:01.048805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:17.144 [2024-11-26 07:27:01.049794] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:17.144 [2024-11-26 07:27:01.049804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:17.144 [2024-11-26 07:27:01.050797] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:17.144 [2024-11-26 07:27:01.050807] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:17.144 [2024-11-26 07:27:01.050812] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:17.144 [2024-11-26 07:27:01.050818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:17.144 [2024-11-26 07:27:01.050927] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:17:17.144 [2024-11-26 07:27:01.050932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:17.144 [2024-11-26 07:27:01.050937] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:17.144 [2024-11-26 07:27:01.051805] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:17.144 [2024-11-26 07:27:01.052813] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:17.144 [2024-11-26 07:27:01.053819] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:17.144 [2024-11-26 07:27:01.054825] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:17.144 [2024-11-26 07:27:01.054869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:17.144 [2024-11-26 07:27:01.055836] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:17.144 [2024-11-26 07:27:01.055844] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:17.144 [2024-11-26 07:27:01.055849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:17.144 [2024-11-26 07:27:01.055873] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:17:17.144 [2024-11-26 07:27:01.055882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:17.144 [2024-11-26 07:27:01.055896] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:17.144 [2024-11-26 07:27:01.055901] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:17.144 [2024-11-26 07:27:01.055904] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:17.144 [2024-11-26 07:27:01.055916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:17.144 [2024-11-26 07:27:01.059871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:17.144 [2024-11-26 07:27:01.059883] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:17:17.144 [2024-11-26 07:27:01.059888] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:17:17.144 [2024-11-26 07:27:01.059892] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:17:17.144 [2024-11-26 07:27:01.059897] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:17.144 [2024-11-26 07:27:01.059905] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:17:17.144 [2024-11-26 07:27:01.059910] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:17:17.144 [2024-11-26 07:27:01.059914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:17:17.144 [2024-11-26 07:27:01.059924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:17.144 [2024-11-26 07:27:01.059934] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:17.144 [2024-11-26 07:27:01.067869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:17.144 [2024-11-26 07:27:01.067883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.144 [2024-11-26 07:27:01.067891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.144 [2024-11-26 07:27:01.067900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.144 [2024-11-26 07:27:01.067908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.144 [2024-11-26 07:27:01.067913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:17.144 [2024-11-26 07:27:01.067920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:17.144 [2024-11-26 07:27:01.067930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:17.144 [2024-11-26 07:27:01.075867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:17.144 [2024-11-26 07:27:01.075878] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:17:17.145 [2024-11-26 07:27:01.075883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:17.145 [2024-11-26 07:27:01.075892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:17:17.145 [2024-11-26 07:27:01.075898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:17.145 [2024-11-26 07:27:01.075907] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:17.145 [2024-11-26 07:27:01.083868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:17.145 [2024-11-26 07:27:01.083934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:17:17.145 [2024-11-26 07:27:01.083942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:17.145 
[2024-11-26 07:27:01.083950] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:17.145 [2024-11-26 07:27:01.083954] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:17.145 [2024-11-26 07:27:01.083958] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:17.145 [2024-11-26 07:27:01.083964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:17.145 [2024-11-26 07:27:01.091869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:17.145 [2024-11-26 07:27:01.091882] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:17:17.145 [2024-11-26 07:27:01.091894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:17:17.145 [2024-11-26 07:27:01.091902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:17.145 [2024-11-26 07:27:01.091909] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:17.145 [2024-11-26 07:27:01.091914] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:17.145 [2024-11-26 07:27:01.091917] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:17.145 [2024-11-26 07:27:01.091923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:17.145 [2024-11-26 07:27:01.099870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:17.145 [2024-11-26 07:27:01.099886] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:17.145 [2024-11-26 07:27:01.099894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:17.145 [2024-11-26 07:27:01.099902] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:17.145 [2024-11-26 07:27:01.099906] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:17.145 [2024-11-26 07:27:01.099910] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:17.145 [2024-11-26 07:27:01.099916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:17.145 [2024-11-26 07:27:01.107869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:17.145 [2024-11-26 07:27:01.107882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:17.145 [2024-11-26 07:27:01.107889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:17.145 [2024-11-26 07:27:01.107897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:17:17.145 [2024-11-26 07:27:01.107903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:17:17.145 [2024-11-26 07:27:01.107909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:17.145 [2024-11-26 07:27:01.107914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:17:17.145 [2024-11-26 07:27:01.107919] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:17.145 [2024-11-26 07:27:01.107923] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:17:17.145 [2024-11-26 07:27:01.107928] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:17:17.145 [2024-11-26 07:27:01.107945] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:17.145 [2024-11-26 07:27:01.115868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:17.145 [2024-11-26 07:27:01.115883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:17.145 [2024-11-26 07:27:01.123869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:17.145 [2024-11-26 07:27:01.123883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:17.145 [2024-11-26 07:27:01.131868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:17.145 [2024-11-26 
07:27:01.131882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:17.145 [2024-11-26 07:27:01.139868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:17.145 [2024-11-26 07:27:01.139884] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:17.145 [2024-11-26 07:27:01.139889] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:17.145 [2024-11-26 07:27:01.139893] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:17.145 [2024-11-26 07:27:01.139896] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:17.145 [2024-11-26 07:27:01.139900] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:17.145 [2024-11-26 07:27:01.139906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:17.145 [2024-11-26 07:27:01.139914] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:17.145 [2024-11-26 07:27:01.139918] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:17.145 [2024-11-26 07:27:01.139922] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:17.145 [2024-11-26 07:27:01.139928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:17.145 [2024-11-26 07:27:01.139937] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:17.145 [2024-11-26 07:27:01.139941] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:17.145 [2024-11-26 07:27:01.139945] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:17.145 [2024-11-26 07:27:01.139951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:17.145 [2024-11-26 07:27:01.139958] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:17.145 [2024-11-26 07:27:01.139963] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:17.145 [2024-11-26 07:27:01.139966] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:17.145 [2024-11-26 07:27:01.139972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:17.145 [2024-11-26 07:27:01.147868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:17.145 [2024-11-26 07:27:01.147883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:17.145 [2024-11-26 07:27:01.147893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:17.145 [2024-11-26 07:27:01.147901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:17.145 ===================================================== 00:17:17.145 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:17.145 ===================================================== 00:17:17.145 Controller Capabilities/Features 00:17:17.145 
================================ 00:17:17.145 Vendor ID: 4e58 00:17:17.145 Subsystem Vendor ID: 4e58 00:17:17.145 Serial Number: SPDK2 00:17:17.145 Model Number: SPDK bdev Controller 00:17:17.145 Firmware Version: 25.01 00:17:17.145 Recommended Arb Burst: 6 00:17:17.145 IEEE OUI Identifier: 8d 6b 50 00:17:17.145 Multi-path I/O 00:17:17.145 May have multiple subsystem ports: Yes 00:17:17.145 May have multiple controllers: Yes 00:17:17.145 Associated with SR-IOV VF: No 00:17:17.145 Max Data Transfer Size: 131072 00:17:17.145 Max Number of Namespaces: 32 00:17:17.145 Max Number of I/O Queues: 127 00:17:17.145 NVMe Specification Version (VS): 1.3 00:17:17.145 NVMe Specification Version (Identify): 1.3 00:17:17.145 Maximum Queue Entries: 256 00:17:17.145 Contiguous Queues Required: Yes 00:17:17.145 Arbitration Mechanisms Supported 00:17:17.145 Weighted Round Robin: Not Supported 00:17:17.145 Vendor Specific: Not Supported 00:17:17.145 Reset Timeout: 15000 ms 00:17:17.145 Doorbell Stride: 4 bytes 00:17:17.145 NVM Subsystem Reset: Not Supported 00:17:17.145 Command Sets Supported 00:17:17.145 NVM Command Set: Supported 00:17:17.146 Boot Partition: Not Supported 00:17:17.146 Memory Page Size Minimum: 4096 bytes 00:17:17.146 Memory Page Size Maximum: 4096 bytes 00:17:17.146 Persistent Memory Region: Not Supported 00:17:17.146 Optional Asynchronous Events Supported 00:17:17.146 Namespace Attribute Notices: Supported 00:17:17.146 Firmware Activation Notices: Not Supported 00:17:17.146 ANA Change Notices: Not Supported 00:17:17.146 PLE Aggregate Log Change Notices: Not Supported 00:17:17.146 LBA Status Info Alert Notices: Not Supported 00:17:17.146 EGE Aggregate Log Change Notices: Not Supported 00:17:17.146 Normal NVM Subsystem Shutdown event: Not Supported 00:17:17.146 Zone Descriptor Change Notices: Not Supported 00:17:17.146 Discovery Log Change Notices: Not Supported 00:17:17.146 Controller Attributes 00:17:17.146 128-bit Host Identifier: Supported 00:17:17.146 
Non-Operational Permissive Mode: Not Supported 00:17:17.146 NVM Sets: Not Supported 00:17:17.146 Read Recovery Levels: Not Supported 00:17:17.146 Endurance Groups: Not Supported 00:17:17.146 Predictable Latency Mode: Not Supported 00:17:17.146 Traffic Based Keep ALive: Not Supported 00:17:17.146 Namespace Granularity: Not Supported 00:17:17.146 SQ Associations: Not Supported 00:17:17.146 UUID List: Not Supported 00:17:17.146 Multi-Domain Subsystem: Not Supported 00:17:17.146 Fixed Capacity Management: Not Supported 00:17:17.146 Variable Capacity Management: Not Supported 00:17:17.146 Delete Endurance Group: Not Supported 00:17:17.146 Delete NVM Set: Not Supported 00:17:17.146 Extended LBA Formats Supported: Not Supported 00:17:17.146 Flexible Data Placement Supported: Not Supported 00:17:17.146 00:17:17.146 Controller Memory Buffer Support 00:17:17.146 ================================ 00:17:17.146 Supported: No 00:17:17.146 00:17:17.146 Persistent Memory Region Support 00:17:17.146 ================================ 00:17:17.146 Supported: No 00:17:17.146 00:17:17.146 Admin Command Set Attributes 00:17:17.146 ============================ 00:17:17.146 Security Send/Receive: Not Supported 00:17:17.146 Format NVM: Not Supported 00:17:17.146 Firmware Activate/Download: Not Supported 00:17:17.146 Namespace Management: Not Supported 00:17:17.146 Device Self-Test: Not Supported 00:17:17.146 Directives: Not Supported 00:17:17.146 NVMe-MI: Not Supported 00:17:17.146 Virtualization Management: Not Supported 00:17:17.146 Doorbell Buffer Config: Not Supported 00:17:17.146 Get LBA Status Capability: Not Supported 00:17:17.146 Command & Feature Lockdown Capability: Not Supported 00:17:17.146 Abort Command Limit: 4 00:17:17.146 Async Event Request Limit: 4 00:17:17.146 Number of Firmware Slots: N/A 00:17:17.146 Firmware Slot 1 Read-Only: N/A 00:17:17.146 Firmware Activation Without Reset: N/A 00:17:17.146 Multiple Update Detection Support: N/A 00:17:17.146 Firmware Update 
Granularity: No Information Provided 00:17:17.146 Per-Namespace SMART Log: No 00:17:17.146 Asymmetric Namespace Access Log Page: Not Supported 00:17:17.146 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:17.146 Command Effects Log Page: Supported 00:17:17.146 Get Log Page Extended Data: Supported 00:17:17.146 Telemetry Log Pages: Not Supported 00:17:17.146 Persistent Event Log Pages: Not Supported 00:17:17.146 Supported Log Pages Log Page: May Support 00:17:17.146 Commands Supported & Effects Log Page: Not Supported 00:17:17.146 Feature Identifiers & Effects Log Page:May Support 00:17:17.146 NVMe-MI Commands & Effects Log Page: May Support 00:17:17.146 Data Area 4 for Telemetry Log: Not Supported 00:17:17.146 Error Log Page Entries Supported: 128 00:17:17.146 Keep Alive: Supported 00:17:17.146 Keep Alive Granularity: 10000 ms 00:17:17.146 00:17:17.146 NVM Command Set Attributes 00:17:17.146 ========================== 00:17:17.146 Submission Queue Entry Size 00:17:17.146 Max: 64 00:17:17.146 Min: 64 00:17:17.146 Completion Queue Entry Size 00:17:17.146 Max: 16 00:17:17.146 Min: 16 00:17:17.146 Number of Namespaces: 32 00:17:17.146 Compare Command: Supported 00:17:17.146 Write Uncorrectable Command: Not Supported 00:17:17.146 Dataset Management Command: Supported 00:17:17.146 Write Zeroes Command: Supported 00:17:17.146 Set Features Save Field: Not Supported 00:17:17.146 Reservations: Not Supported 00:17:17.146 Timestamp: Not Supported 00:17:17.146 Copy: Supported 00:17:17.146 Volatile Write Cache: Present 00:17:17.146 Atomic Write Unit (Normal): 1 00:17:17.146 Atomic Write Unit (PFail): 1 00:17:17.146 Atomic Compare & Write Unit: 1 00:17:17.146 Fused Compare & Write: Supported 00:17:17.146 Scatter-Gather List 00:17:17.146 SGL Command Set: Supported (Dword aligned) 00:17:17.146 SGL Keyed: Not Supported 00:17:17.146 SGL Bit Bucket Descriptor: Not Supported 00:17:17.146 SGL Metadata Pointer: Not Supported 00:17:17.146 Oversized SGL: Not Supported 00:17:17.146 SGL 
Metadata Address: Not Supported 00:17:17.146 SGL Offset: Not Supported 00:17:17.146 Transport SGL Data Block: Not Supported 00:17:17.146 Replay Protected Memory Block: Not Supported 00:17:17.146 00:17:17.146 Firmware Slot Information 00:17:17.146 ========================= 00:17:17.146 Active slot: 1 00:17:17.146 Slot 1 Firmware Revision: 25.01 00:17:17.146 00:17:17.146 00:17:17.146 Commands Supported and Effects 00:17:17.146 ============================== 00:17:17.146 Admin Commands 00:17:17.146 -------------- 00:17:17.146 Get Log Page (02h): Supported 00:17:17.146 Identify (06h): Supported 00:17:17.146 Abort (08h): Supported 00:17:17.146 Set Features (09h): Supported 00:17:17.146 Get Features (0Ah): Supported 00:17:17.146 Asynchronous Event Request (0Ch): Supported 00:17:17.146 Keep Alive (18h): Supported 00:17:17.146 I/O Commands 00:17:17.146 ------------ 00:17:17.146 Flush (00h): Supported LBA-Change 00:17:17.146 Write (01h): Supported LBA-Change 00:17:17.146 Read (02h): Supported 00:17:17.146 Compare (05h): Supported 00:17:17.146 Write Zeroes (08h): Supported LBA-Change 00:17:17.146 Dataset Management (09h): Supported LBA-Change 00:17:17.146 Copy (19h): Supported LBA-Change 00:17:17.146 00:17:17.146 Error Log 00:17:17.146 ========= 00:17:17.146 00:17:17.146 Arbitration 00:17:17.146 =========== 00:17:17.146 Arbitration Burst: 1 00:17:17.146 00:17:17.146 Power Management 00:17:17.146 ================ 00:17:17.146 Number of Power States: 1 00:17:17.146 Current Power State: Power State #0 00:17:17.146 Power State #0: 00:17:17.146 Max Power: 0.00 W 00:17:17.146 Non-Operational State: Operational 00:17:17.146 Entry Latency: Not Reported 00:17:17.146 Exit Latency: Not Reported 00:17:17.146 Relative Read Throughput: 0 00:17:17.146 Relative Read Latency: 0 00:17:17.146 Relative Write Throughput: 0 00:17:17.146 Relative Write Latency: 0 00:17:17.146 Idle Power: Not Reported 00:17:17.146 Active Power: Not Reported 00:17:17.146 Non-Operational Permissive Mode: Not 
Supported 00:17:17.146 00:17:17.146 Health Information 00:17:17.146 ================== 00:17:17.146 Critical Warnings: 00:17:17.146 Available Spare Space: OK 00:17:17.146 Temperature: OK 00:17:17.146 Device Reliability: OK 00:17:17.146 Read Only: No 00:17:17.146 Volatile Memory Backup: OK 00:17:17.146 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:17.146 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:17.146 Available Spare: 0% 00:17:17.146 Available Sp[2024-11-26 07:27:01.148002] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:17.146 [2024-11-26 07:27:01.155870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:17.146 [2024-11-26 07:27:01.155900] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:17:17.146 [2024-11-26 07:27:01.155909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.146 [2024-11-26 07:27:01.155916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.146 [2024-11-26 07:27:01.155923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.146 [2024-11-26 07:27:01.155929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.146 [2024-11-26 07:27:01.159868] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:17.146 [2024-11-26 07:27:01.159880] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:17.146 
[2024-11-26 07:27:01.160000] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:17.146 [2024-11-26 07:27:01.160050] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:17:17.147 [2024-11-26 07:27:01.160058] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:17:17.147 [2024-11-26 07:27:01.161006] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:17.147 [2024-11-26 07:27:01.161018] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:17:17.147 [2024-11-26 07:27:01.161065] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:17.147 [2024-11-26 07:27:01.162455] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:17.147 are Threshold: 0% 00:17:17.147 Life Percentage Used: 0% 00:17:17.147 Data Units Read: 0 00:17:17.147 Data Units Written: 0 00:17:17.147 Host Read Commands: 0 00:17:17.147 Host Write Commands: 0 00:17:17.147 Controller Busy Time: 0 minutes 00:17:17.147 Power Cycles: 0 00:17:17.147 Power On Hours: 0 hours 00:17:17.147 Unsafe Shutdowns: 0 00:17:17.147 Unrecoverable Media Errors: 0 00:17:17.147 Lifetime Error Log Entries: 0 00:17:17.147 Warning Temperature Time: 0 minutes 00:17:17.147 Critical Temperature Time: 0 minutes 00:17:17.147 00:17:17.147 Number of Queues 00:17:17.147 ================ 00:17:17.147 Number of I/O Submission Queues: 127 00:17:17.147 Number of I/O Completion Queues: 127 00:17:17.147 00:17:17.147 Active Namespaces 00:17:17.147 ================= 00:17:17.147 Namespace ID:1 00:17:17.147 Error Recovery Timeout: Unlimited 
00:17:17.147 Command Set Identifier: NVM (00h) 00:17:17.147 Deallocate: Supported 00:17:17.147 Deallocated/Unwritten Error: Not Supported 00:17:17.147 Deallocated Read Value: Unknown 00:17:17.147 Deallocate in Write Zeroes: Not Supported 00:17:17.147 Deallocated Guard Field: 0xFFFF 00:17:17.147 Flush: Supported 00:17:17.147 Reservation: Supported 00:17:17.147 Namespace Sharing Capabilities: Multiple Controllers 00:17:17.147 Size (in LBAs): 131072 (0GiB) 00:17:17.147 Capacity (in LBAs): 131072 (0GiB) 00:17:17.147 Utilization (in LBAs): 131072 (0GiB) 00:17:17.147 NGUID: 81D81C6FA5A84DFFB35CD804DFC66FD5 00:17:17.147 UUID: 81d81c6f-a5a8-4dff-b35c-d804dfc66fd5 00:17:17.147 Thin Provisioning: Not Supported 00:17:17.147 Per-NS Atomic Units: Yes 00:17:17.147 Atomic Boundary Size (Normal): 0 00:17:17.147 Atomic Boundary Size (PFail): 0 00:17:17.147 Atomic Boundary Offset: 0 00:17:17.147 Maximum Single Source Range Length: 65535 00:17:17.147 Maximum Copy Length: 65535 00:17:17.147 Maximum Source Range Count: 1 00:17:17.147 NGUID/EUI64 Never Reused: No 00:17:17.147 Namespace Write Protected: No 00:17:17.147 Number of LBA Formats: 1 00:17:17.147 Current LBA Format: LBA Format #00 00:17:17.147 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:17.147 00:17:17.147 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:17.407 [2024-11-26 07:27:01.362967] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:22.693 Initializing NVMe Controllers 00:17:22.693 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:22.693 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:17:22.693 Initialization complete. Launching workers. 00:17:22.693 ======================================================== 00:17:22.693 Latency(us) 00:17:22.693 Device Information : IOPS MiB/s Average min max 00:17:22.693 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40018.58 156.32 3199.01 845.60 10784.17 00:17:22.693 ======================================================== 00:17:22.693 Total : 40018.58 156.32 3199.01 845.60 10784.17 00:17:22.693 00:17:22.693 [2024-11-26 07:27:06.475083] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:22.693 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:22.693 [2024-11-26 07:27:06.665631] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:27.986 Initializing NVMe Controllers 00:17:27.986 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:27.986 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:27.986 Initialization complete. Launching workers. 
00:17:27.986 ======================================================== 00:17:27.986 Latency(us) 00:17:27.986 Device Information : IOPS MiB/s Average min max 00:17:27.986 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35052.18 136.92 3650.97 1102.87 10667.94 00:17:27.986 ======================================================== 00:17:27.986 Total : 35052.18 136.92 3650.97 1102.87 10667.94 00:17:27.986 00:17:27.986 [2024-11-26 07:27:11.683602] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:27.986 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:27.986 [2024-11-26 07:27:11.889805] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:33.277 [2024-11-26 07:27:17.035947] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:33.277 Initializing NVMe Controllers 00:17:33.277 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:33.277 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:33.277 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:33.277 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:33.277 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:33.277 Initialization complete. Launching workers. 
00:17:33.277 Starting thread on core 2 00:17:33.277 Starting thread on core 3 00:17:33.277 Starting thread on core 1 00:17:33.277 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:33.277 [2024-11-26 07:27:17.325155] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:36.579 [2024-11-26 07:27:20.410448] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:36.579 Initializing NVMe Controllers 00:17:36.579 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:36.579 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:36.579 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:36.579 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:36.579 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:36.579 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:36.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:36.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:36.580 Initialization complete. Launching workers. 
00:17:36.580 Starting thread on core 1 with urgent priority queue 00:17:36.580 Starting thread on core 2 with urgent priority queue 00:17:36.580 Starting thread on core 3 with urgent priority queue 00:17:36.580 Starting thread on core 0 with urgent priority queue 00:17:36.580 SPDK bdev Controller (SPDK2 ) core 0: 11922.00 IO/s 8.39 secs/100000 ios 00:17:36.580 SPDK bdev Controller (SPDK2 ) core 1: 13255.00 IO/s 7.54 secs/100000 ios 00:17:36.580 SPDK bdev Controller (SPDK2 ) core 2: 12113.67 IO/s 8.26 secs/100000 ios 00:17:36.580 SPDK bdev Controller (SPDK2 ) core 3: 11228.67 IO/s 8.91 secs/100000 ios 00:17:36.580 ======================================================== 00:17:36.580 00:17:36.580 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:36.580 [2024-11-26 07:27:20.707284] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:36.840 Initializing NVMe Controllers 00:17:36.840 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:36.840 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:36.840 Namespace ID: 1 size: 0GB 00:17:36.840 Initialization complete. 00:17:36.840 INFO: using host memory buffer for IO 00:17:36.840 Hello world! 
00:17:36.840 [2024-11-26 07:27:20.715317] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:36.840 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:37.101 [2024-11-26 07:27:21.011117] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:38.044 Initializing NVMe Controllers 00:17:38.044 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:38.044 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:38.044 Initialization complete. Launching workers. 00:17:38.044 submit (in ns) avg, min, max = 7612.2, 3900.0, 4000080.8 00:17:38.044 complete (in ns) avg, min, max = 18591.1, 2389.2, 4077204.2 00:17:38.044 00:17:38.044 Submit histogram 00:17:38.044 ================ 00:17:38.044 Range in us Cumulative Count 00:17:38.044 3.893 - 3.920: 0.6537% ( 125) 00:17:38.044 3.920 - 3.947: 4.0370% ( 647) 00:17:38.044 3.947 - 3.973: 11.7032% ( 1466) 00:17:38.044 3.973 - 4.000: 23.5057% ( 2257) 00:17:38.044 4.000 - 4.027: 36.7202% ( 2527) 00:17:38.044 4.027 - 4.053: 49.5006% ( 2444) 00:17:38.044 4.053 - 4.080: 65.8056% ( 3118) 00:17:38.044 4.080 - 4.107: 80.4947% ( 2809) 00:17:38.044 4.107 - 4.133: 90.9376% ( 1997) 00:17:38.044 4.133 - 4.160: 96.4650% ( 1057) 00:17:38.044 4.160 - 4.187: 98.5881% ( 406) 00:17:38.044 4.187 - 4.213: 99.2313% ( 123) 00:17:38.044 4.213 - 4.240: 99.4091% ( 34) 00:17:38.044 4.240 - 4.267: 99.4457% ( 7) 00:17:38.044 4.267 - 4.293: 99.4718% ( 5) 00:17:38.044 4.427 - 4.453: 99.4771% ( 1) 00:17:38.044 4.480 - 4.507: 99.4875% ( 2) 00:17:38.044 4.560 - 4.587: 99.5032% ( 3) 00:17:38.044 4.720 - 4.747: 99.5084% ( 1) 00:17:38.044 4.773 - 4.800: 99.5137% ( 1) 00:17:38.044 4.907 - 4.933: 99.5189% ( 1) 
00:17:38.044 4.933 - 4.960: 99.5241% ( 1) 00:17:38.044 5.173 - 5.200: 99.5294% ( 1) 00:17:38.044 5.227 - 5.253: 99.5346% ( 1) 00:17:38.044 5.280 - 5.307: 99.5398% ( 1) 00:17:38.044 5.387 - 5.413: 99.5451% ( 1) 00:17:38.044 5.467 - 5.493: 99.5503% ( 1) 00:17:38.044 5.573 - 5.600: 99.5555% ( 1) 00:17:38.044 5.653 - 5.680: 99.5660% ( 2) 00:17:38.044 5.787 - 5.813: 99.5712% ( 1) 00:17:38.044 5.813 - 5.840: 99.5764% ( 1) 00:17:38.044 5.973 - 6.000: 99.5869% ( 2) 00:17:38.044 6.000 - 6.027: 99.5921% ( 1) 00:17:38.044 6.027 - 6.053: 99.6026% ( 2) 00:17:38.044 6.080 - 6.107: 99.6078% ( 1) 00:17:38.044 6.107 - 6.133: 99.6130% ( 1) 00:17:38.044 6.133 - 6.160: 99.6183% ( 1) 00:17:38.044 6.160 - 6.187: 99.6235% ( 1) 00:17:38.044 6.187 - 6.213: 99.6287% ( 1) 00:17:38.044 6.213 - 6.240: 99.6339% ( 1) 00:17:38.044 6.240 - 6.267: 99.6392% ( 1) 00:17:38.044 6.267 - 6.293: 99.6549% ( 3) 00:17:38.044 6.293 - 6.320: 99.6601% ( 1) 00:17:38.044 6.320 - 6.347: 99.6653% ( 1) 00:17:38.044 6.400 - 6.427: 99.6706% ( 1) 00:17:38.044 6.427 - 6.453: 99.6758% ( 1) 00:17:38.044 6.453 - 6.480: 99.6810% ( 1) 00:17:38.044 6.480 - 6.507: 99.6915% ( 2) 00:17:38.044 6.507 - 6.533: 99.6967% ( 1) 00:17:38.044 6.533 - 6.560: 99.7019% ( 1) 00:17:38.044 6.613 - 6.640: 99.7176% ( 3) 00:17:38.044 6.640 - 6.667: 99.7228% ( 1) 00:17:38.044 6.667 - 6.693: 99.7281% ( 1) 00:17:38.044 6.720 - 6.747: 99.7333% ( 1) 00:17:38.044 6.827 - 6.880: 99.7385% ( 1) 00:17:38.044 6.880 - 6.933: 99.7438% ( 1) 00:17:38.044 6.933 - 6.987: 99.7699% ( 5) 00:17:38.044 6.987 - 7.040: 99.7751% ( 1) 00:17:38.044 7.040 - 7.093: 99.7804% ( 1) 00:17:38.044 7.147 - 7.200: 99.7856% ( 1) 00:17:38.044 7.200 - 7.253: 99.8013% ( 3) 00:17:38.045 7.253 - 7.307: 99.8065% ( 1) 00:17:38.045 7.307 - 7.360: 99.8222% ( 3) 00:17:38.045 7.360 - 7.413: 99.8431% ( 4) 00:17:38.045 7.413 - 7.467: 99.8484% ( 1) 00:17:38.045 7.467 - 7.520: 99.8588% ( 2) 00:17:38.045 7.627 - 7.680: 99.8745% ( 3) 00:17:38.045 7.733 - 7.787: 99.8797% ( 1) 00:17:38.045 7.840 - 
7.893: 99.8850% ( 1) 00:17:38.045 8.107 - 8.160: 99.8902% ( 1) 00:17:38.045 8.640 - 8.693: 99.8954% ( 1) 00:17:38.045 9.120 - 9.173: 99.9006% ( 1) 00:17:38.045 9.227 - 9.280: 99.9059% ( 1) 00:17:38.045 15.467 - 15.573: 99.9111% ( 1) 00:17:38.045 3986.773 - 4014.080: 100.0000% ( 17) 00:17:38.045 00:17:38.045 Complete histogram 00:17:38.045 ================== 00:17:38.045 Range in us Cumulative Count 00:17:38.045 2.387 - 2.400: 0.0052% ( 1) 00:17:38.045 2.400 - 2.413: 0.8315% ( 158) 00:17:38.045 2.413 - [2024-11-26 07:27:22.105593] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:38.045 2.427: 1.0093% ( 34) 00:17:38.045 2.427 - 2.440: 1.1295% ( 23) 00:17:38.045 2.440 - 2.453: 2.9650% ( 351) 00:17:38.045 2.453 - 2.467: 50.9439% ( 9175) 00:17:38.045 2.467 - 2.480: 58.8401% ( 1510) 00:17:38.045 2.480 - 2.493: 72.4573% ( 2604) 00:17:38.045 2.493 - 2.507: 78.2461% ( 1107) 00:17:38.045 2.507 - 2.520: 81.6765% ( 656) 00:17:38.045 2.520 - 2.533: 85.5723% ( 745) 00:17:38.045 2.533 - 2.547: 91.7429% ( 1180) 00:17:38.045 2.547 - 2.560: 95.4453% ( 708) 00:17:38.045 2.560 - 2.573: 97.3174% ( 358) 00:17:38.045 2.573 - 2.587: 98.5463% ( 235) 00:17:38.045 2.587 - 2.600: 99.1058% ( 107) 00:17:38.045 2.600 - 2.613: 99.3150% ( 40) 00:17:38.045 2.613 - 2.627: 99.3463% ( 6) 00:17:38.045 2.627 - 2.640: 99.3620% ( 3) 00:17:38.045 2.853 - 2.867: 99.3673% ( 1) 00:17:38.045 2.987 - 3.000: 99.3725% ( 1) 00:17:38.045 4.480 - 4.507: 99.3777% ( 1) 00:17:38.045 4.587 - 4.613: 99.3829% ( 1) 00:17:38.045 4.640 - 4.667: 99.3882% ( 1) 00:17:38.045 4.720 - 4.747: 99.3934% ( 1) 00:17:38.045 4.827 - 4.853: 99.4039% ( 2) 00:17:38.045 4.853 - 4.880: 99.4091% ( 1) 00:17:38.045 4.907 - 4.933: 99.4195% ( 2) 00:17:38.045 4.933 - 4.960: 99.4300% ( 2) 00:17:38.045 5.013 - 5.040: 99.4352% ( 1) 00:17:38.045 5.067 - 5.093: 99.4457% ( 2) 00:17:38.045 5.093 - 5.120: 99.4562% ( 2) 00:17:38.045 5.120 - 5.147: 99.4614% ( 1) 00:17:38.045 5.173 - 5.200: 99.4771% 
( 3) 00:17:38.045 5.307 - 5.333: 99.4823% ( 1) 00:17:38.045 5.493 - 5.520: 99.4875% ( 1) 00:17:38.045 5.600 - 5.627: 99.4928% ( 1) 00:17:38.045 5.627 - 5.653: 99.5032% ( 2) 00:17:38.045 5.707 - 5.733: 99.5084% ( 1) 00:17:38.045 5.733 - 5.760: 99.5137% ( 1) 00:17:38.045 5.760 - 5.787: 99.5189% ( 1) 00:17:38.045 5.813 - 5.840: 99.5241% ( 1) 00:17:38.045 5.840 - 5.867: 99.5294% ( 1) 00:17:38.045 5.893 - 5.920: 99.5451% ( 3) 00:17:38.045 5.973 - 6.000: 99.5503% ( 1) 00:17:38.045 6.080 - 6.107: 99.5555% ( 1) 00:17:38.045 6.267 - 6.293: 99.5607% ( 1) 00:17:38.045 6.427 - 6.453: 99.5660% ( 1) 00:17:38.045 6.533 - 6.560: 99.5712% ( 1) 00:17:38.045 8.800 - 8.853: 99.5764% ( 1) 00:17:38.045 9.973 - 10.027: 99.5817% ( 1) 00:17:38.045 13.227 - 13.280: 99.5869% ( 1) 00:17:38.045 15.467 - 15.573: 99.5921% ( 1) 00:17:38.045 156.160 - 157.013: 99.5973% ( 1) 00:17:38.045 3986.773 - 4014.080: 99.9843% ( 74) 00:17:38.045 4014.080 - 4041.387: 99.9948% ( 2) 00:17:38.045 4068.693 - 4096.000: 100.0000% ( 1) 00:17:38.045 00:17:38.045 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:38.045 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:38.045 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:38.045 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:38.045 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:38.307 [ 00:17:38.307 { 00:17:38.307 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:38.307 "subtype": "Discovery", 00:17:38.307 "listen_addresses": [], 00:17:38.307 "allow_any_host": true, 
00:17:38.307 "hosts": [] 00:17:38.307 }, 00:17:38.307 { 00:17:38.307 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:38.307 "subtype": "NVMe", 00:17:38.307 "listen_addresses": [ 00:17:38.307 { 00:17:38.307 "trtype": "VFIOUSER", 00:17:38.307 "adrfam": "IPv4", 00:17:38.307 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:38.307 "trsvcid": "0" 00:17:38.307 } 00:17:38.307 ], 00:17:38.307 "allow_any_host": true, 00:17:38.307 "hosts": [], 00:17:38.307 "serial_number": "SPDK1", 00:17:38.307 "model_number": "SPDK bdev Controller", 00:17:38.307 "max_namespaces": 32, 00:17:38.307 "min_cntlid": 1, 00:17:38.307 "max_cntlid": 65519, 00:17:38.307 "namespaces": [ 00:17:38.307 { 00:17:38.307 "nsid": 1, 00:17:38.307 "bdev_name": "Malloc1", 00:17:38.307 "name": "Malloc1", 00:17:38.307 "nguid": "FE2BE4B329FB4E6A846D649B9AA7C780", 00:17:38.307 "uuid": "fe2be4b3-29fb-4e6a-846d-649b9aa7c780" 00:17:38.307 }, 00:17:38.307 { 00:17:38.307 "nsid": 2, 00:17:38.307 "bdev_name": "Malloc3", 00:17:38.307 "name": "Malloc3", 00:17:38.307 "nguid": "5EC4FBB572D74DCC8D43D47856629A8E", 00:17:38.307 "uuid": "5ec4fbb5-72d7-4dcc-8d43-d47856629a8e" 00:17:38.307 } 00:17:38.307 ] 00:17:38.307 }, 00:17:38.307 { 00:17:38.307 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:38.307 "subtype": "NVMe", 00:17:38.307 "listen_addresses": [ 00:17:38.307 { 00:17:38.307 "trtype": "VFIOUSER", 00:17:38.307 "adrfam": "IPv4", 00:17:38.307 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:38.307 "trsvcid": "0" 00:17:38.307 } 00:17:38.307 ], 00:17:38.307 "allow_any_host": true, 00:17:38.307 "hosts": [], 00:17:38.307 "serial_number": "SPDK2", 00:17:38.307 "model_number": "SPDK bdev Controller", 00:17:38.307 "max_namespaces": 32, 00:17:38.307 "min_cntlid": 1, 00:17:38.307 "max_cntlid": 65519, 00:17:38.307 "namespaces": [ 00:17:38.307 { 00:17:38.307 "nsid": 1, 00:17:38.307 "bdev_name": "Malloc2", 00:17:38.307 "name": "Malloc2", 00:17:38.307 "nguid": "81D81C6FA5A84DFFB35CD804DFC66FD5", 00:17:38.307 "uuid": 
"81d81c6f-a5a8-4dff-b35c-d804dfc66fd5" 00:17:38.307 } 00:17:38.307 ] 00:17:38.307 } 00:17:38.307 ] 00:17:38.307 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:38.307 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2070683 00:17:38.307 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:38.307 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:17:38.307 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:38.307 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:38.307 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:38.307 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:17:38.307 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:38.307 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:38.568 Malloc4 00:17:38.568 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:38.568 [2024-11-26 07:27:22.540613] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:38.568 [2024-11-26 07:27:22.694633] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:38.829 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:38.829 Asynchronous Event Request test 00:17:38.829 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:38.829 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:38.829 Registering asynchronous event callbacks... 00:17:38.829 Starting namespace attribute notice tests for all controllers... 00:17:38.829 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:38.829 aer_cb - Changed Namespace 00:17:38.829 Cleaning up... 
00:17:38.829 [ 00:17:38.829 { 00:17:38.829 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:38.829 "subtype": "Discovery", 00:17:38.829 "listen_addresses": [], 00:17:38.829 "allow_any_host": true, 00:17:38.829 "hosts": [] 00:17:38.829 }, 00:17:38.829 { 00:17:38.829 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:38.829 "subtype": "NVMe", 00:17:38.829 "listen_addresses": [ 00:17:38.829 { 00:17:38.829 "trtype": "VFIOUSER", 00:17:38.829 "adrfam": "IPv4", 00:17:38.829 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:38.829 "trsvcid": "0" 00:17:38.829 } 00:17:38.829 ], 00:17:38.829 "allow_any_host": true, 00:17:38.829 "hosts": [], 00:17:38.829 "serial_number": "SPDK1", 00:17:38.829 "model_number": "SPDK bdev Controller", 00:17:38.829 "max_namespaces": 32, 00:17:38.829 "min_cntlid": 1, 00:17:38.829 "max_cntlid": 65519, 00:17:38.829 "namespaces": [ 00:17:38.829 { 00:17:38.829 "nsid": 1, 00:17:38.829 "bdev_name": "Malloc1", 00:17:38.829 "name": "Malloc1", 00:17:38.829 "nguid": "FE2BE4B329FB4E6A846D649B9AA7C780", 00:17:38.829 "uuid": "fe2be4b3-29fb-4e6a-846d-649b9aa7c780" 00:17:38.829 }, 00:17:38.829 { 00:17:38.829 "nsid": 2, 00:17:38.829 "bdev_name": "Malloc3", 00:17:38.829 "name": "Malloc3", 00:17:38.829 "nguid": "5EC4FBB572D74DCC8D43D47856629A8E", 00:17:38.829 "uuid": "5ec4fbb5-72d7-4dcc-8d43-d47856629a8e" 00:17:38.829 } 00:17:38.829 ] 00:17:38.829 }, 00:17:38.829 { 00:17:38.829 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:38.829 "subtype": "NVMe", 00:17:38.829 "listen_addresses": [ 00:17:38.829 { 00:17:38.829 "trtype": "VFIOUSER", 00:17:38.829 "adrfam": "IPv4", 00:17:38.829 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:38.829 "trsvcid": "0" 00:17:38.829 } 00:17:38.829 ], 00:17:38.829 "allow_any_host": true, 00:17:38.829 "hosts": [], 00:17:38.829 "serial_number": "SPDK2", 00:17:38.830 "model_number": "SPDK bdev Controller", 00:17:38.830 "max_namespaces": 32, 00:17:38.830 "min_cntlid": 1, 00:17:38.830 "max_cntlid": 65519, 00:17:38.830 "namespaces": [ 
00:17:38.830 { 00:17:38.830 "nsid": 1, 00:17:38.830 "bdev_name": "Malloc2", 00:17:38.830 "name": "Malloc2", 00:17:38.830 "nguid": "81D81C6FA5A84DFFB35CD804DFC66FD5", 00:17:38.830 "uuid": "81d81c6f-a5a8-4dff-b35c-d804dfc66fd5" 00:17:38.830 }, 00:17:38.830 { 00:17:38.830 "nsid": 2, 00:17:38.830 "bdev_name": "Malloc4", 00:17:38.830 "name": "Malloc4", 00:17:38.830 "nguid": "91CBDDC85FEB4DEBA7CC0E14F713A0F8", 00:17:38.830 "uuid": "91cbddc8-5feb-4deb-a7cc-0e14f713a0f8" 00:17:38.830 } 00:17:38.830 ] 00:17:38.830 } 00:17:38.830 ] 00:17:38.830 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2070683 00:17:38.830 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:38.830 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2061036 00:17:38.830 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2061036 ']' 00:17:38.830 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2061036 00:17:38.830 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:38.830 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:38.830 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2061036 00:17:39.090 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.091 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.091 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2061036' 00:17:39.091 killing process with pid 2061036 00:17:39.091 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 2061036 00:17:39.091 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2061036 00:17:39.091 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:39.091 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:39.091 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:39.091 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:39.091 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:39.091 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2070701 00:17:39.091 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2070701' 00:17:39.091 Process pid: 2070701 00:17:39.091 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:39.091 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:39.091 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2070701 00:17:39.091 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2070701 ']' 00:17:39.091 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.091 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.091 
07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.091 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.091 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:39.091 [2024-11-26 07:27:23.181901] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:39.091 [2024-11-26 07:27:23.182811] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:17:39.091 [2024-11-26 07:27:23.182852] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.353 [2024-11-26 07:27:23.263030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:39.353 [2024-11-26 07:27:23.297419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.353 [2024-11-26 07:27:23.297457] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.353 [2024-11-26 07:27:23.297465] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.353 [2024-11-26 07:27:23.297472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.353 [2024-11-26 07:27:23.297478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:39.353 [2024-11-26 07:27:23.298944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.353 [2024-11-26 07:27:23.299170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.353 [2024-11-26 07:27:23.299325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:39.353 [2024-11-26 07:27:23.299325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.353 [2024-11-26 07:27:23.354168] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:39.353 [2024-11-26 07:27:23.354605] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:39.353 [2024-11-26 07:27:23.355340] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:39.353 [2024-11-26 07:27:23.355531] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:39.353 [2024-11-26 07:27:23.355746] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:17:39.926 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.926 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:39.926 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:40.869 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:41.130 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:41.130 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:41.130 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:41.130 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:41.130 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:41.392 Malloc1 00:17:41.392 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:41.653 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:41.914 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:17:41.914 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:41.914 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:41.914 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:42.174 Malloc2 00:17:42.174 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:42.434 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:42.434 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:42.694 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:42.694 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2070701 00:17:42.694 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2070701 ']' 00:17:42.694 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2070701 00:17:42.694 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:42.694 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.694 07:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2070701 00:17:42.694 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:42.694 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:42.694 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2070701' 00:17:42.694 killing process with pid 2070701 00:17:42.694 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2070701 00:17:42.694 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2070701 00:17:42.956 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:42.956 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:42.956 00:17:42.956 real 0m51.547s 00:17:42.956 user 3m17.394s 00:17:42.956 sys 0m2.809s 00:17:42.956 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:42.956 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:42.956 ************************************ 00:17:42.956 END TEST nvmf_vfio_user 00:17:42.956 ************************************ 00:17:42.956 07:27:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:42.956 07:27:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:42.956 07:27:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:42.956 07:27:26 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:17:42.956 ************************************ 00:17:42.956 START TEST nvmf_vfio_user_nvme_compliance 00:17:42.956 ************************************ 00:17:42.956 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:42.956 * Looking for test storage... 00:17:43.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.218 07:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.218 07:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:43.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.218 --rc genhtml_branch_coverage=1 00:17:43.218 --rc genhtml_function_coverage=1 00:17:43.218 --rc genhtml_legend=1 00:17:43.218 --rc geninfo_all_blocks=1 00:17:43.218 --rc geninfo_unexecuted_blocks=1 00:17:43.218 00:17:43.218 ' 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:43.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.218 --rc genhtml_branch_coverage=1 00:17:43.218 --rc genhtml_function_coverage=1 00:17:43.218 --rc genhtml_legend=1 00:17:43.218 --rc geninfo_all_blocks=1 00:17:43.218 --rc geninfo_unexecuted_blocks=1 00:17:43.218 00:17:43.218 ' 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:43.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.218 --rc genhtml_branch_coverage=1 00:17:43.218 --rc genhtml_function_coverage=1 00:17:43.218 --rc 
genhtml_legend=1 00:17:43.218 --rc geninfo_all_blocks=1 00:17:43.218 --rc geninfo_unexecuted_blocks=1 00:17:43.218 00:17:43.218 ' 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:43.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.218 --rc genhtml_branch_coverage=1 00:17:43.218 --rc genhtml_function_coverage=1 00:17:43.218 --rc genhtml_legend=1 00:17:43.218 --rc geninfo_all_blocks=1 00:17:43.218 --rc geninfo_unexecuted_blocks=1 00:17:43.218 00:17:43.218 ' 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.218 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.219 07:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:43.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:43.219 07:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2071688 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2071688' 00:17:43.219 Process pid: 2071688 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2071688 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2071688 ']' 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.219 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:43.219 [2024-11-26 07:27:27.283104] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:17:43.219 [2024-11-26 07:27:27.283177] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.479 [2024-11-26 07:27:27.369446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:43.479 [2024-11-26 07:27:27.411633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.479 [2024-11-26 07:27:27.411671] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.479 [2024-11-26 07:27:27.411679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.479 [2024-11-26 07:27:27.411686] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.479 [2024-11-26 07:27:27.411692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:43.479 [2024-11-26 07:27:27.413174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.479 [2024-11-26 07:27:27.413296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.479 [2024-11-26 07:27:27.413299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.049 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.050 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:17:44.050 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:44.993 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:44.993 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:44.993 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:44.993 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.993 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:44.993 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.993 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:44.993 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:44.993 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.993 07:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:45.252 malloc0 00:17:45.252 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.253 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:45.253 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.253 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:45.253 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.253 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:45.253 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.253 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:45.253 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.253 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:45.253 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.253 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:45.253 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:45.253 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:45.253 00:17:45.253 00:17:45.253 CUnit - A unit testing framework for C - Version 2.1-3 00:17:45.253 http://cunit.sourceforge.net/ 00:17:45.253 00:17:45.253 00:17:45.253 Suite: nvme_compliance 00:17:45.513 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-26 07:27:29.384324] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:45.513 [2024-11-26 07:27:29.385671] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:45.513 [2024-11-26 07:27:29.385683] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:45.513 [2024-11-26 07:27:29.385688] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:45.513 [2024-11-26 07:27:29.387343] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:45.513 passed 00:17:45.513 Test: admin_identify_ctrlr_verify_fused ...[2024-11-26 07:27:29.481937] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:45.513 [2024-11-26 07:27:29.484952] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:45.513 passed 00:17:45.513 Test: admin_identify_ns ...[2024-11-26 07:27:29.580102] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:45.513 [2024-11-26 07:27:29.639875] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:45.773 [2024-11-26 07:27:29.647875] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:45.773 [2024-11-26 07:27:29.668993] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:17:45.773 passed 00:17:45.773 Test: admin_get_features_mandatory_features ...[2024-11-26 07:27:29.763082] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:45.773 [2024-11-26 07:27:29.766103] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:45.773 passed 00:17:45.773 Test: admin_get_features_optional_features ...[2024-11-26 07:27:29.860641] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:45.773 [2024-11-26 07:27:29.863656] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:45.773 passed 00:17:46.034 Test: admin_set_features_number_of_queues ...[2024-11-26 07:27:29.957123] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:46.034 [2024-11-26 07:27:30.061987] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:46.034 passed 00:17:46.034 Test: admin_get_log_page_mandatory_logs ...[2024-11-26 07:27:30.157043] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:46.034 [2024-11-26 07:27:30.160059] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:46.295 passed 00:17:46.295 Test: admin_get_log_page_with_lpo ...[2024-11-26 07:27:30.253184] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:46.295 [2024-11-26 07:27:30.320874] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:46.295 [2024-11-26 07:27:30.333938] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:46.295 passed 00:17:46.556 Test: fabric_property_get ...[2024-11-26 07:27:30.428149] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:46.556 [2024-11-26 07:27:30.429420] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:46.556 [2024-11-26 07:27:30.431178] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:46.556 passed 00:17:46.556 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-26 07:27:30.526842] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:46.556 [2024-11-26 07:27:30.528100] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:46.556 [2024-11-26 07:27:30.529857] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:46.556 passed 00:17:46.556 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-26 07:27:30.623034] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:46.817 [2024-11-26 07:27:30.704874] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:46.817 [2024-11-26 07:27:30.720868] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:46.817 [2024-11-26 07:27:30.725985] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:46.817 passed 00:17:46.817 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-26 07:27:30.819959] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:46.817 [2024-11-26 07:27:30.821202] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:46.817 [2024-11-26 07:27:30.822975] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:46.817 passed 00:17:46.817 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-26 07:27:30.917131] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:47.078 [2024-11-26 07:27:30.992868] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:47.078 [2024-11-26 
07:27:31.016871] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:47.078 [2024-11-26 07:27:31.021961] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:47.078 passed 00:17:47.078 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-26 07:27:31.114949] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:47.078 [2024-11-26 07:27:31.116200] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:47.078 [2024-11-26 07:27:31.116220] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:47.078 [2024-11-26 07:27:31.118974] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:47.078 passed 00:17:47.339 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-26 07:27:31.211112] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:47.339 [2024-11-26 07:27:31.302873] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:47.339 [2024-11-26 07:27:31.310870] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:47.339 [2024-11-26 07:27:31.318880] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:47.339 [2024-11-26 07:27:31.326866] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:47.339 [2024-11-26 07:27:31.355959] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:47.339 passed 00:17:47.339 Test: admin_create_io_sq_verify_pc ...[2024-11-26 07:27:31.449958] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:47.340 [2024-11-26 07:27:31.468881] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:47.601 [2024-11-26 07:27:31.486127] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:47.601 passed 00:17:47.601 Test: admin_create_io_qp_max_qps ...[2024-11-26 07:27:31.577661] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:48.545 [2024-11-26 07:27:32.672874] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:17:49.229 [2024-11-26 07:27:33.048214] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:49.229 passed 00:17:49.229 Test: admin_create_io_sq_shared_cq ...[2024-11-26 07:27:33.142113] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:49.229 [2024-11-26 07:27:33.273867] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:49.229 [2024-11-26 07:27:33.310924] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:49.491 passed 00:17:49.491 00:17:49.491 Run Summary: Type Total Ran Passed Failed Inactive 00:17:49.491 suites 1 1 n/a 0 0 00:17:49.491 tests 18 18 18 0 0 00:17:49.491 asserts 360 360 360 0 n/a 00:17:49.491 00:17:49.491 Elapsed time = 1.645 seconds 00:17:49.491 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2071688 00:17:49.491 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2071688 ']' 00:17:49.491 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2071688 00:17:49.491 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:17:49.491 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.491 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2071688 00:17:49.491 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:49.491 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:49.491 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2071688' 00:17:49.491 killing process with pid 2071688 00:17:49.491 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2071688 00:17:49.491 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2071688 00:17:49.491 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:49.491 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:49.491 00:17:49.491 real 0m6.554s 00:17:49.491 user 0m18.618s 00:17:49.491 sys 0m0.545s 00:17:49.491 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.491 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:49.491 ************************************ 00:17:49.491 END TEST nvmf_vfio_user_nvme_compliance 00:17:49.491 ************************************ 00:17:49.491 07:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:49.491 07:27:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:49.491 07:27:33 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.491 07:27:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:49.753 ************************************ 00:17:49.753 START TEST nvmf_vfio_user_fuzz 00:17:49.753 ************************************ 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:49.753 * Looking for test storage... 00:17:49.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:49.753 07:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:49.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.753 --rc genhtml_branch_coverage=1 00:17:49.753 --rc genhtml_function_coverage=1 00:17:49.753 --rc genhtml_legend=1 00:17:49.753 --rc geninfo_all_blocks=1 00:17:49.753 --rc geninfo_unexecuted_blocks=1 00:17:49.753 00:17:49.753 ' 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:49.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.753 --rc genhtml_branch_coverage=1 00:17:49.753 --rc genhtml_function_coverage=1 00:17:49.753 --rc genhtml_legend=1 00:17:49.753 --rc geninfo_all_blocks=1 00:17:49.753 --rc geninfo_unexecuted_blocks=1 00:17:49.753 00:17:49.753 ' 00:17:49.753 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:49.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.753 --rc genhtml_branch_coverage=1 00:17:49.753 --rc genhtml_function_coverage=1 00:17:49.754 --rc genhtml_legend=1 00:17:49.754 --rc geninfo_all_blocks=1 00:17:49.754 --rc geninfo_unexecuted_blocks=1 00:17:49.754 00:17:49.754 ' 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:49.754 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:49.754 --rc genhtml_branch_coverage=1 00:17:49.754 --rc genhtml_function_coverage=1 00:17:49.754 --rc genhtml_legend=1 00:17:49.754 --rc geninfo_all_blocks=1 00:17:49.754 --rc geninfo_unexecuted_blocks=1 00:17:49.754 00:17:49.754 ' 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.754 07:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:49.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2072969 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2072969' 00:17:49.754 Process pid: 2072969 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2072969 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2072969 ']' 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.754 07:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.754 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:50.702 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.702 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:17:50.702 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:51.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:51.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:51.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:51.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:51.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:51.643 malloc0 00:17:51.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:51.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:51.903 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.903 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:51.903 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.903 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:51.903 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.903 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:51.903 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.904 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:51.904 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.904 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:51.904 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:24.040 Fuzzing completed. Shutting down the fuzz application 00:18:24.040 00:18:24.040 Dumping successful admin opcodes: 00:18:24.040 9, 10, 00:18:24.040 Dumping successful io opcodes: 00:18:24.040 0, 00:18:24.040 NS: 0x20000081ef00 I/O qp, Total commands completed: 1094281, total successful commands: 4310, random_seed: 787397248 00:18:24.040 NS: 0x20000081ef00 admin qp, Total commands completed: 138768, total successful commands: 30, random_seed: 1948141184 00:18:24.040 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:24.040 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.040 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:24.040 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.040 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2072969 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2072969 ']' 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2072969 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2072969 00:18:24.041 07:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2072969' 00:18:24.041 killing process with pid 2072969 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2072969 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2072969 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:24.041 00:18:24.041 real 0m33.777s 00:18:24.041 user 0m37.269s 00:18:24.041 sys 0m26.663s 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:24.041 ************************************ 00:18:24.041 END TEST nvmf_vfio_user_fuzz 00:18:24.041 ************************************ 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:24.041 ************************************ 00:18:24.041 START TEST nvmf_auth_target 00:18:24.041 ************************************ 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:24.041 * Looking for test storage... 00:18:24.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:24.041 07:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:24.041 07:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:24.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.041 --rc genhtml_branch_coverage=1 00:18:24.041 --rc genhtml_function_coverage=1 00:18:24.041 --rc genhtml_legend=1 00:18:24.041 --rc geninfo_all_blocks=1 00:18:24.041 --rc geninfo_unexecuted_blocks=1 00:18:24.041 00:18:24.041 ' 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:24.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.041 --rc genhtml_branch_coverage=1 00:18:24.041 --rc genhtml_function_coverage=1 00:18:24.041 --rc genhtml_legend=1 00:18:24.041 --rc geninfo_all_blocks=1 00:18:24.041 --rc geninfo_unexecuted_blocks=1 00:18:24.041 00:18:24.041 ' 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:24.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.041 --rc genhtml_branch_coverage=1 00:18:24.041 --rc genhtml_function_coverage=1 00:18:24.041 --rc genhtml_legend=1 00:18:24.041 --rc geninfo_all_blocks=1 00:18:24.041 --rc geninfo_unexecuted_blocks=1 00:18:24.041 00:18:24.041 ' 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:24.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.041 --rc genhtml_branch_coverage=1 00:18:24.041 --rc genhtml_function_coverage=1 00:18:24.041 --rc genhtml_legend=1 00:18:24.041 
--rc geninfo_all_blocks=1 00:18:24.041 --rc geninfo_unexecuted_blocks=1 00:18:24.041 00:18:24.041 ' 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.041 
07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.041 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:24.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:24.042 07:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:24.042 07:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:24.042 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:32.179 07:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:32.179 07:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:32.179 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:32.179 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:32.180 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.180 
07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:32.180 Found net devices under 0000:31:00.0: cvl_0_0 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:32.180 
07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:32.180 Found net devices under 0000:31:00.1: cvl_0_1 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:32.180 07:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:32.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:18:32.180 00:18:32.180 --- 10.0.0.2 ping statistics --- 00:18:32.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.180 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:32.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:18:32.180 00:18:32.180 --- 10.0.0.1 ping statistics --- 00:18:32.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.180 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2083854 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2083854 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2083854 ']' 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.180 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2083888 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=73ba86a929548419010016b5b12b2e2b6976e51708479bcf 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.f8f 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 73ba86a929548419010016b5b12b2e2b6976e51708479bcf 0 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 73ba86a929548419010016b5b12b2e2b6976e51708479bcf 0 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=73ba86a929548419010016b5b12b2e2b6976e51708479bcf 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.f8f 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo 
/tmp/spdk.key-null.f8f 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.f8f 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:32.753 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:32.754 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:32.754 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0e4ba14a5898a3fbc138b93bad881e27601ec2806219bd85fa0295cd3965d71d 00:18:32.754 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:32.754 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.OpQ 00:18:32.754 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0e4ba14a5898a3fbc138b93bad881e27601ec2806219bd85fa0295cd3965d71d 3 00:18:32.754 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0e4ba14a5898a3fbc138b93bad881e27601ec2806219bd85fa0295cd3965d71d 3 00:18:32.754 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:32.754 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:32.754 07:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0e4ba14a5898a3fbc138b93bad881e27601ec2806219bd85fa0295cd3965d71d 00:18:32.754 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:32.754 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:32.754 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.OpQ 00:18:32.754 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.OpQ 00:18:32.754 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.OpQ 00:18:32.754 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:32.754 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:32.754 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:32.754 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:32.754 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:32.754 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:33.015 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:33.015 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c7dd0b4eeeeb276d7ee7ab1c3b49e450 00:18:33.015 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:33.015 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Ueb 00:18:33.015 07:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c7dd0b4eeeeb276d7ee7ab1c3b49e450 1 00:18:33.015 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c7dd0b4eeeeb276d7ee7ab1c3b49e450 1 00:18:33.015 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:33.015 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:33.015 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c7dd0b4eeeeb276d7ee7ab1c3b49e450 00:18:33.015 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:33.015 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:33.015 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Ueb 00:18:33.015 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Ueb 00:18:33.015 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Ueb 00:18:33.015 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:33.015 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:33.016 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.016 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:33.016 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:33.016 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:33.016 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:33.016 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0de50566b38da390994e242be43ba1e5a0051ff3d671a38e 00:18:33.016 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:33.016 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Ggq 00:18:33.016 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0de50566b38da390994e242be43ba1e5a0051ff3d671a38e 2 00:18:33.016 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0de50566b38da390994e242be43ba1e5a0051ff3d671a38e 2 00:18:33.016 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:33.016 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:33.016 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0de50566b38da390994e242be43ba1e5a0051ff3d671a38e 00:18:33.016 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:33.016 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:33.016 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Ggq 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Ggq 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Ggq 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:33.016 07:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0f02e12ce9094ec7687a28387599cd3ce0617f8bde405a94 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.7xJ 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0f02e12ce9094ec7687a28387599cd3ce0617f8bde405a94 2 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0f02e12ce9094ec7687a28387599cd3ce0617f8bde405a94 2 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0f02e12ce9094ec7687a28387599cd3ce0617f8bde405a94 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.7xJ 
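The gen_dhchap_key/format_dhchap_key helpers traced above read random hex via `xxd -p -c0 ... /dev/urandom`, then wrap that ASCII hex string into a `DHHC-1:<digest-id>:<base64>:` secret. A minimal Python sketch of that formatting step, assuming the DH-HMAC-CHAP convention of appending a little-endian CRC-32 to the key material before base64-encoding (the CRC layout is an assumption, not read from the script):

```python
import base64
import struct
import zlib


def format_dhchap_key(hex_key: str, digest_id: int) -> str:
    """Sketch of the format_dhchap_key step in the trace: the ASCII hex text
    itself is the key material; a 4-byte little-endian CRC-32 trailer is
    appended (assumed layout), and the result is base64-wrapped as
    DHHC-1:<dd>:<base64>:."""
    key = hex_key.encode()                    # key material is the hex text, not raw bytes
    crc = struct.pack("<I", zlib.crc32(key))  # assumed little-endian CRC-32 trailer
    return "DHHC-1:%02d:%s:" % (digest_id, base64.b64encode(key + crc).decode())


# Example with the sha256 key material generated at target/auth.sh@95 above
# (digest id 1 = sha256 in the trace's digests map):
secret = format_dhchap_key("c7dd0b4eeeeb276d7ee7ab1c3b49e450", 1)
print(secret)  # DHHC-1:01:<48 base64 characters>:
```

Note how this matches the secrets seen later in the trace: the base64 body of `DHHC-1:01:YzdkZDBi...` decodes back to the ASCII hex key plus a 4-byte trailer.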
00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.7xJ 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.7xJ 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=61083ffa6978044366d652e5709d356a 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.x9E 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 61083ffa6978044366d652e5709d356a 1 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 61083ffa6978044366d652e5709d356a 1 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:33.016 07:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=61083ffa6978044366d652e5709d356a 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.x9E 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.x9E 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.x9E 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:33.016 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=015ccac6643c8aae515d28582671519a2dd668433c269c17c9395d4f73629b61 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.UZ6 00:18:33.278 07:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 015ccac6643c8aae515d28582671519a2dd668433c269c17c9395d4f73629b61 3 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 015ccac6643c8aae515d28582671519a2dd668433c269c17c9395d4f73629b61 3 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=015ccac6643c8aae515d28582671519a2dd668433c269c17c9395d4f73629b61 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.UZ6 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.UZ6 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.UZ6 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2083854 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2083854 ']' 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2083888 /var/tmp/host.sock 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2083888 ']' 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:33.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.278 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.539 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.539 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:33.539 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:33.539 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.539 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.539 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.539 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:33.539 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.f8f 00:18:33.539 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.539 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.539 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.539 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.f8f 00:18:33.539 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.f8f 00:18:33.800 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.OpQ ]] 00:18:33.800 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OpQ 00:18:33.800 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.800 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.800 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.800 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OpQ 00:18:33.800 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OpQ 00:18:34.061 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:34.061 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Ueb 00:18:34.061 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.061 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.061 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.061 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Ueb 00:18:34.061 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Ueb 00:18:34.061 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.Ggq ]] 00:18:34.061 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ggq 00:18:34.061 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.061 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.061 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.061 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ggq 00:18:34.061 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ggq 00:18:34.321 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:34.321 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.7xJ 00:18:34.321 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.321 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.321 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.321 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.7xJ 00:18:34.321 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.7xJ 00:18:34.582 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.x9E ]] 00:18:34.582 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.x9E 00:18:34.582 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.582 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.582 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.582 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.x9E 00:18:34.582 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.x9E 00:18:34.843 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:34.843 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.UZ6 00:18:34.843 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.843 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.843 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.843 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.UZ6 00:18:34.843 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.UZ6 00:18:34.843 07:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:34.843 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:34.843 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.843 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.843 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:34.843 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:35.104 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:35.104 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.104 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:35.104 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:35.104 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:35.104 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.104 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.104 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.104 07:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.104 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.104 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.104 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.104 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.364 00:18:35.364 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.364 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.364 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.624 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.624 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.624 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.624 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x
00:18:35.624 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.624 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:35.624 {
00:18:35.624 "cntlid": 1,
00:18:35.624 "qid": 0,
00:18:35.624 "state": "enabled",
00:18:35.624 "thread": "nvmf_tgt_poll_group_000",
00:18:35.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:35.624 "listen_address": {
00:18:35.624 "trtype": "TCP",
00:18:35.624 "adrfam": "IPv4",
00:18:35.624 "traddr": "10.0.0.2",
00:18:35.624 "trsvcid": "4420"
00:18:35.624 },
00:18:35.624 "peer_address": {
00:18:35.624 "trtype": "TCP",
00:18:35.624 "adrfam": "IPv4",
00:18:35.624 "traddr": "10.0.0.1",
00:18:35.624 "trsvcid": "37484"
00:18:35.624 },
00:18:35.624 "auth": {
00:18:35.624 "state": "completed",
00:18:35.624 "digest": "sha256",
00:18:35.624 "dhgroup": "null"
00:18:35.624 }
00:18:35.624 }
00:18:35.624 ]'
00:18:35.624 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:35.624 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:35.624 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:35.624 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:35.624 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:35.624 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:35.624 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:35.624 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.885 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:18:35.885 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:18:36.455 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:36.716 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:36.976
00:18:36.977 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:36.977 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:36.977 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:37.237 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:37.237 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:37.237 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.237 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:37.237 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.237 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:37.237 {
00:18:37.237 "cntlid": 3,
00:18:37.237 "qid": 0,
00:18:37.237 "state": "enabled",
00:18:37.237 "thread": "nvmf_tgt_poll_group_000",
00:18:37.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:37.237 "listen_address": {
00:18:37.237 "trtype": "TCP",
00:18:37.237 "adrfam": "IPv4",
00:18:37.237 "traddr": "10.0.0.2",
00:18:37.237 "trsvcid": "4420"
00:18:37.237 },
00:18:37.237 "peer_address": {
00:18:37.237 "trtype": "TCP",
00:18:37.237 "adrfam": "IPv4",
00:18:37.237 "traddr": "10.0.0.1",
00:18:37.237 "trsvcid": "37502"
00:18:37.237 },
00:18:37.237 "auth": {
00:18:37.237 "state": "completed",
00:18:37.237 "digest": "sha256",
00:18:37.237 "dhgroup": "null"
00:18:37.237 }
00:18:37.237 }
00:18:37.237 ]'
00:18:37.237 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:37.237 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:37.237 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:37.237 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:37.237 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:37.237 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:37.237 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:37.237 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:37.497 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==:
00:18:37.497 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==:
00:18:38.439 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:38.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:38.439 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:38.439 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:38.439 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:38.439 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:38.439 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:38.439 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:18:38.439 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:18:38.439 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:18:38.439 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:38.439 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:38.439 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:38.439 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:38.439 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:38.440 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:38.440 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:38.440 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:38.440 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:38.440 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:38.440 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:38.440 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:38.701
00:18:38.701 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:38.701 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:38.701 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:38.962 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:38.962 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:38.962 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:38.962 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:38.962 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:38.962 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:38.962 {
00:18:38.962 "cntlid": 5,
00:18:38.962 "qid": 0,
00:18:38.962 "state": "enabled",
00:18:38.962 "thread": "nvmf_tgt_poll_group_000",
00:18:38.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:38.962 "listen_address": {
00:18:38.962 "trtype": "TCP",
00:18:38.962 "adrfam": "IPv4",
00:18:38.962 "traddr": "10.0.0.2",
00:18:38.962 "trsvcid": "4420"
00:18:38.962 },
00:18:38.962 "peer_address": {
00:18:38.962 "trtype": "TCP",
00:18:38.962 "adrfam": "IPv4",
00:18:38.962 "traddr": "10.0.0.1",
00:18:38.962 "trsvcid": "51794"
00:18:38.962 },
00:18:38.962 "auth": {
00:18:38.962 "state": "completed",
00:18:38.962 "digest": "sha256",
00:18:38.962 "dhgroup": "null"
00:18:38.962 }
00:18:38.962 }
00:18:38.962 ]'
00:18:38.962 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:38.962 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:38.962 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:38.962 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:38.962 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:38.962 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:38.962 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:38.962 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:39.223 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC:
00:18:39.223 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC:
00:18:40.164 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:40.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:40.164 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:40.164 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:40.164 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.164 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:40.164 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:40.164 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:18:40.164 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:18:40.164 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:18:40.164 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:40.164 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:40.164 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:40.164 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:40.164 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:40.164 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:18:40.164 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:40.164 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.164 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:40.164 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:40.164 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:40.164 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:40.425
00:18:40.425 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:40.425 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:40.425 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:40.686 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:40.686 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:40.686 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:40.686 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.686 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:40.686 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:40.686 {
00:18:40.686 "cntlid": 7,
00:18:40.686 "qid": 0,
00:18:40.686 "state": "enabled",
00:18:40.686 "thread": "nvmf_tgt_poll_group_000",
00:18:40.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:40.686 "listen_address": {
00:18:40.686 "trtype": "TCP",
00:18:40.686 "adrfam": "IPv4",
00:18:40.686 "traddr": "10.0.0.2",
00:18:40.686 "trsvcid": "4420"
00:18:40.686 },
00:18:40.686 "peer_address": {
00:18:40.686 "trtype": "TCP",
00:18:40.686 "adrfam": "IPv4",
00:18:40.686 "traddr": "10.0.0.1",
00:18:40.686 "trsvcid": "51820"
00:18:40.686 },
00:18:40.686 "auth": {
00:18:40.686 "state": "completed",
00:18:40.686 "digest": "sha256",
00:18:40.686 "dhgroup": "null"
00:18:40.686 }
00:18:40.686 }
00:18:40.686 ]'
00:18:40.686 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:40.686 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:40.686 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:40.686 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:40.686 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:40.686 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:40.686 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:40.686 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:40.947 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=:
00:18:40.947 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=:
00:18:41.888 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:41.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:41.888 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:41.888 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:41.888 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:41.888 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:41.888 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:41.888 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:41.888 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:41.888 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:41.888 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:18:41.888 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:41.888 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:41.888 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:41.888 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:41.888 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:41.888 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:41.888 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:41.888 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:41.889 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:41.889 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:41.889 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:41.889 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:42.151
00:18:42.151 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:42.151 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:42.151 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:42.413 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:42.413 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:42.413 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:42.413 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.413 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:42.413 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:42.413 {
00:18:42.413 "cntlid": 9,
00:18:42.413 "qid": 0,
00:18:42.413 "state": "enabled",
00:18:42.413 "thread": "nvmf_tgt_poll_group_000",
00:18:42.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:42.413 "listen_address": {
00:18:42.413 "trtype": "TCP",
00:18:42.413 "adrfam": "IPv4",
00:18:42.413 "traddr": "10.0.0.2",
00:18:42.413 "trsvcid": "4420"
00:18:42.413 },
00:18:42.413 "peer_address": {
00:18:42.413 "trtype": "TCP",
00:18:42.413 "adrfam": "IPv4",
00:18:42.413 "traddr": "10.0.0.1",
00:18:42.413 "trsvcid": "51840"
00:18:42.413 },
00:18:42.413 "auth": {
00:18:42.413 "state": "completed",
00:18:42.413 "digest": "sha256",
00:18:42.413 "dhgroup": "ffdhe2048"
00:18:42.413 }
00:18:42.413 }
00:18:42.413 ]'
00:18:42.413 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:42.413 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:42.413 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:42.413 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:42.413 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:42.413 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:42.413 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:42.413 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:42.672 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=:
00:18:42.672 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=:
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:43.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:43.614 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:43.875
00:18:43.875 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:43.875 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:43.875 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:44.135 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:44.135 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:44.135 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:44.135 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:44.135 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:44.135 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:44.135 {
00:18:44.135 "cntlid": 11,
00:18:44.135 "qid": 0,
00:18:44.135 "state": "enabled",
00:18:44.135 "thread": "nvmf_tgt_poll_group_000",
00:18:44.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:44.135 "listen_address": {
00:18:44.135 "trtype": "TCP",
00:18:44.135 "adrfam": "IPv4",
00:18:44.135 "traddr": "10.0.0.2",
00:18:44.135 "trsvcid": "4420"
00:18:44.135 },
00:18:44.135 "peer_address": {
00:18:44.135 "trtype": "TCP",
00:18:44.135 "adrfam": "IPv4",
00:18:44.135 "traddr": "10.0.0.1",
00:18:44.135 "trsvcid": "51874"
00:18:44.135 },
00:18:44.135 "auth": {
00:18:44.135 "state": "completed",
00:18:44.135 "digest": "sha256",
00:18:44.135 "dhgroup": "ffdhe2048"
00:18:44.135 }
00:18:44.135 }
00:18:44.135 ]'
00:18:44.135 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:44.135 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:44.135 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:44.135 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:44.135 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:44.135 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:44.135 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:44.135 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:44.396 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==:
00:18:44.396 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==:
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:45.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:45.337 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:45.597
00:18:45.597 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:45.597 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:45.597 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:45.857 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:45.857 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:45.857 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:45.857 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:45.857 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:45.857 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:45.857 {
00:18:45.857 "cntlid": 13,
00:18:45.857 "qid": 0,
00:18:45.857 "state": "enabled",
00:18:45.857 "thread": "nvmf_tgt_poll_group_000",
00:18:45.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:45.857 "listen_address": {
00:18:45.857 "trtype": "TCP",
00:18:45.857 "adrfam": "IPv4",
00:18:45.857 "traddr": "10.0.0.2",
00:18:45.857 "trsvcid": "4420"
00:18:45.857 },
00:18:45.857 "peer_address": {
00:18:45.857 "trtype": "TCP",
00:18:45.857 "adrfam": "IPv4",
00:18:45.857 "traddr": "10.0.0.1",
00:18:45.857 "trsvcid": "51896"
00:18:45.857 },
00:18:45.857 "auth": {
00:18:45.857 "state": "completed",
00:18:45.857 "digest": "sha256",
00:18:45.857 "dhgroup": "ffdhe2048"
00:18:45.857 }
00:18:45.857 }
00:18:45.857 ]'
00:18:45.857 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:45.857 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:45.857 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:45.857 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:45.857 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:45.857 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:45.857 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:45.858 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:46.118 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC:
00:18:46.118 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC:
00:18:46.690 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:46.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:46.950 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:46.950 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:46.950 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.950 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:46.950 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:46.950 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:46.950 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:46.950 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:18:46.950 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:46.950 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:46.950 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:46.950 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:46.950 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:46.950 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:18:46.950 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:46.950 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.950 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:46.950 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:46.950 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:46.950 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:47.210 00:18:47.210 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.210 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.210 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.470 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.470 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.470 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.471 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.471 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.471 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.471 { 00:18:47.471 "cntlid": 15, 00:18:47.471 "qid": 0, 00:18:47.471 "state": "enabled", 00:18:47.471 "thread": "nvmf_tgt_poll_group_000", 00:18:47.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:47.471 "listen_address": { 00:18:47.471 "trtype": "TCP", 00:18:47.471 "adrfam": "IPv4", 00:18:47.471 "traddr": "10.0.0.2", 00:18:47.471 "trsvcid": 
"4420" 00:18:47.471 }, 00:18:47.471 "peer_address": { 00:18:47.471 "trtype": "TCP", 00:18:47.471 "adrfam": "IPv4", 00:18:47.471 "traddr": "10.0.0.1", 00:18:47.471 "trsvcid": "51914" 00:18:47.471 }, 00:18:47.471 "auth": { 00:18:47.471 "state": "completed", 00:18:47.471 "digest": "sha256", 00:18:47.471 "dhgroup": "ffdhe2048" 00:18:47.471 } 00:18:47.471 } 00:18:47.471 ]' 00:18:47.471 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.471 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.471 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.471 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:47.471 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.471 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.471 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.471 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.731 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:18:47.731 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret 
DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:18:48.672 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.672 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:48.672 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.672 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.672 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.672 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.672 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.672 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:48.672 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:48.672 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:48.672 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.672 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:48.672 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:48.672 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:48.672 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.672 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.672 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.672 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.672 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.673 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.673 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.673 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.932 00:18:48.932 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.932 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:48.932 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.192 { 00:18:49.192 "cntlid": 17, 00:18:49.192 "qid": 0, 00:18:49.192 "state": "enabled", 00:18:49.192 "thread": "nvmf_tgt_poll_group_000", 00:18:49.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:49.192 "listen_address": { 00:18:49.192 "trtype": "TCP", 00:18:49.192 "adrfam": "IPv4", 00:18:49.192 "traddr": "10.0.0.2", 00:18:49.192 "trsvcid": "4420" 00:18:49.192 }, 00:18:49.192 "peer_address": { 00:18:49.192 "trtype": "TCP", 00:18:49.192 "adrfam": "IPv4", 00:18:49.192 "traddr": "10.0.0.1", 00:18:49.192 "trsvcid": "53614" 00:18:49.192 }, 00:18:49.192 "auth": { 00:18:49.192 "state": "completed", 00:18:49.192 "digest": "sha256", 00:18:49.192 "dhgroup": "ffdhe3072" 00:18:49.192 } 00:18:49.192 } 00:18:49.192 ]' 00:18:49.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.192 07:28:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:49.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.453 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:18:49.453 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:18:50.026 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.026 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:50.026 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.026 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.026 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.026 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.026 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:50.026 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:50.288 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:50.288 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.288 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:50.288 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:50.288 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:50.288 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.288 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.288 07:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.288 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.288 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.288 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.288 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.288 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.549 00:18:50.549 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.549 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.549 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.810 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.810 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.810 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.811 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.811 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.811 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.811 { 00:18:50.811 "cntlid": 19, 00:18:50.811 "qid": 0, 00:18:50.811 "state": "enabled", 00:18:50.811 "thread": "nvmf_tgt_poll_group_000", 00:18:50.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:50.811 "listen_address": { 00:18:50.811 "trtype": "TCP", 00:18:50.811 "adrfam": "IPv4", 00:18:50.811 "traddr": "10.0.0.2", 00:18:50.811 "trsvcid": "4420" 00:18:50.811 }, 00:18:50.811 "peer_address": { 00:18:50.811 "trtype": "TCP", 00:18:50.811 "adrfam": "IPv4", 00:18:50.811 "traddr": "10.0.0.1", 00:18:50.811 "trsvcid": "53630" 00:18:50.811 }, 00:18:50.811 "auth": { 00:18:50.811 "state": "completed", 00:18:50.811 "digest": "sha256", 00:18:50.811 "dhgroup": "ffdhe3072" 00:18:50.811 } 00:18:50.811 } 00:18:50.811 ]' 00:18:50.811 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.811 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.811 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.811 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:50.811 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.811 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.811 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:50.811 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.070 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:18:51.070 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:18:52.011 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.011 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:52.011 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.011 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.011 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.011 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.011 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:52.011 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:52.011 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:52.011 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.011 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:52.011 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:52.011 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:52.011 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.012 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.012 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.012 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.012 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.012 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.012 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.012 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.273 00:18:52.273 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.273 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.273 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.535 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.535 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.535 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.535 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.535 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.535 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.535 { 00:18:52.535 "cntlid": 21, 00:18:52.535 "qid": 0, 00:18:52.535 "state": "enabled", 00:18:52.535 "thread": "nvmf_tgt_poll_group_000", 00:18:52.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:52.535 "listen_address": { 
00:18:52.535 "trtype": "TCP", 00:18:52.535 "adrfam": "IPv4", 00:18:52.535 "traddr": "10.0.0.2", 00:18:52.535 "trsvcid": "4420" 00:18:52.535 }, 00:18:52.535 "peer_address": { 00:18:52.535 "trtype": "TCP", 00:18:52.535 "adrfam": "IPv4", 00:18:52.535 "traddr": "10.0.0.1", 00:18:52.535 "trsvcid": "53654" 00:18:52.535 }, 00:18:52.535 "auth": { 00:18:52.535 "state": "completed", 00:18:52.535 "digest": "sha256", 00:18:52.535 "dhgroup": "ffdhe3072" 00:18:52.535 } 00:18:52.535 } 00:18:52.535 ]' 00:18:52.535 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.535 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.535 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.535 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:52.535 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.535 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.535 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.535 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.796 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:18:52.796 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:53.738 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:53.999 00:18:53.999 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.999 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:18:53.999 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.262 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.262 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.262 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.262 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.262 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.262 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.262 { 00:18:54.262 "cntlid": 23, 00:18:54.262 "qid": 0, 00:18:54.262 "state": "enabled", 00:18:54.262 "thread": "nvmf_tgt_poll_group_000", 00:18:54.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:54.262 "listen_address": { 00:18:54.262 "trtype": "TCP", 00:18:54.262 "adrfam": "IPv4", 00:18:54.262 "traddr": "10.0.0.2", 00:18:54.262 "trsvcid": "4420" 00:18:54.262 }, 00:18:54.262 "peer_address": { 00:18:54.262 "trtype": "TCP", 00:18:54.262 "adrfam": "IPv4", 00:18:54.262 "traddr": "10.0.0.1", 00:18:54.262 "trsvcid": "53670" 00:18:54.262 }, 00:18:54.262 "auth": { 00:18:54.262 "state": "completed", 00:18:54.262 "digest": "sha256", 00:18:54.262 "dhgroup": "ffdhe3072" 00:18:54.262 } 00:18:54.262 } 00:18:54.262 ]' 00:18:54.262 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.262 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.262 07:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.262 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:54.262 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.262 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.262 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.262 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.524 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:18:54.524 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.464 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.465 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.725 00:18:55.725 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.725 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.725 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.985 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.986 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.986 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.986 07:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.986 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.986 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.986 { 00:18:55.986 "cntlid": 25, 00:18:55.986 "qid": 0, 00:18:55.986 "state": "enabled", 00:18:55.986 "thread": "nvmf_tgt_poll_group_000", 00:18:55.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:55.986 "listen_address": { 00:18:55.986 "trtype": "TCP", 00:18:55.986 "adrfam": "IPv4", 00:18:55.986 "traddr": "10.0.0.2", 00:18:55.986 "trsvcid": "4420" 00:18:55.986 }, 00:18:55.986 "peer_address": { 00:18:55.986 "trtype": "TCP", 00:18:55.986 "adrfam": "IPv4", 00:18:55.986 "traddr": "10.0.0.1", 00:18:55.986 "trsvcid": "53684" 00:18:55.986 }, 00:18:55.986 "auth": { 00:18:55.986 "state": "completed", 00:18:55.986 "digest": "sha256", 00:18:55.986 "dhgroup": "ffdhe4096" 00:18:55.986 } 00:18:55.986 } 00:18:55.986 ]' 00:18:55.986 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.986 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.986 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.986 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:55.986 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.986 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.986 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.986 07:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.251 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:18:56.251 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:18:57.192 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.192 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.452 00:18:57.452 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.452 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.452 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.712 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.712 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.712 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.712 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.712 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.712 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.712 { 00:18:57.712 "cntlid": 27, 00:18:57.712 "qid": 0, 00:18:57.712 "state": "enabled", 00:18:57.712 "thread": "nvmf_tgt_poll_group_000", 00:18:57.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:57.712 
"listen_address": { 00:18:57.712 "trtype": "TCP", 00:18:57.712 "adrfam": "IPv4", 00:18:57.712 "traddr": "10.0.0.2", 00:18:57.712 "trsvcid": "4420" 00:18:57.712 }, 00:18:57.712 "peer_address": { 00:18:57.712 "trtype": "TCP", 00:18:57.712 "adrfam": "IPv4", 00:18:57.712 "traddr": "10.0.0.1", 00:18:57.712 "trsvcid": "53706" 00:18:57.712 }, 00:18:57.712 "auth": { 00:18:57.712 "state": "completed", 00:18:57.712 "digest": "sha256", 00:18:57.712 "dhgroup": "ffdhe4096" 00:18:57.712 } 00:18:57.712 } 00:18:57.712 ]' 00:18:57.712 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.712 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.712 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.712 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:57.712 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.712 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.712 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.712 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.973 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:18:57.973 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.915 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.175 00:18:59.175 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:59.175 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.175 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.436 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.436 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.436 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.436 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.436 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.436 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.436 { 00:18:59.436 "cntlid": 29, 00:18:59.436 "qid": 0, 00:18:59.436 "state": "enabled", 00:18:59.436 "thread": "nvmf_tgt_poll_group_000", 00:18:59.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:59.436 "listen_address": { 00:18:59.436 "trtype": "TCP", 00:18:59.436 "adrfam": "IPv4", 00:18:59.436 "traddr": "10.0.0.2", 00:18:59.436 "trsvcid": "4420" 00:18:59.436 }, 00:18:59.436 "peer_address": { 00:18:59.436 "trtype": "TCP", 00:18:59.436 "adrfam": "IPv4", 00:18:59.436 "traddr": "10.0.0.1", 00:18:59.436 "trsvcid": "45252" 00:18:59.436 }, 00:18:59.436 "auth": { 00:18:59.436 "state": "completed", 00:18:59.436 "digest": "sha256", 00:18:59.436 "dhgroup": "ffdhe4096" 00:18:59.436 } 00:18:59.436 } 00:18:59.436 ]' 00:18:59.436 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.436 07:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.436 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.436 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:59.436 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.436 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.436 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.436 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.697 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:18:59.697 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:00.702 07:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.702 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:01.056 00:19:01.056 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.056 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.056 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.346 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.346 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.346 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.346 07:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.346 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.346 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.346 { 00:19:01.346 "cntlid": 31, 00:19:01.346 "qid": 0, 00:19:01.346 "state": "enabled", 00:19:01.346 "thread": "nvmf_tgt_poll_group_000", 00:19:01.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:01.346 "listen_address": { 00:19:01.346 "trtype": "TCP", 00:19:01.346 "adrfam": "IPv4", 00:19:01.346 "traddr": "10.0.0.2", 00:19:01.346 "trsvcid": "4420" 00:19:01.346 }, 00:19:01.346 "peer_address": { 00:19:01.346 "trtype": "TCP", 00:19:01.346 "adrfam": "IPv4", 00:19:01.346 "traddr": "10.0.0.1", 00:19:01.346 "trsvcid": "45270" 00:19:01.346 }, 00:19:01.346 "auth": { 00:19:01.346 "state": "completed", 00:19:01.346 "digest": "sha256", 00:19:01.346 "dhgroup": "ffdhe4096" 00:19:01.346 } 00:19:01.346 } 00:19:01.346 ]' 00:19:01.346 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.346 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.346 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.346 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:01.346 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.346 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.346 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.346 07:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.608 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:19:01.608 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:19:02.179 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.179 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:02.179 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.179 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.179 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.179 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.179 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.179 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:19:02.179 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:02.439 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:02.439 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.439 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:02.439 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:02.439 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:02.439 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.439 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.439 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.439 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.439 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.439 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.439 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.439 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.700 00:19:02.700 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.700 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.700 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.961 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.961 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.961 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.961 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.961 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.961 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.961 { 00:19:02.961 "cntlid": 33, 00:19:02.961 "qid": 0, 00:19:02.961 "state": "enabled", 00:19:02.961 "thread": "nvmf_tgt_poll_group_000", 00:19:02.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:02.961 "listen_address": { 
00:19:02.961 "trtype": "TCP", 00:19:02.961 "adrfam": "IPv4", 00:19:02.961 "traddr": "10.0.0.2", 00:19:02.961 "trsvcid": "4420" 00:19:02.961 }, 00:19:02.961 "peer_address": { 00:19:02.961 "trtype": "TCP", 00:19:02.961 "adrfam": "IPv4", 00:19:02.961 "traddr": "10.0.0.1", 00:19:02.961 "trsvcid": "45294" 00:19:02.961 }, 00:19:02.961 "auth": { 00:19:02.961 "state": "completed", 00:19:02.961 "digest": "sha256", 00:19:02.961 "dhgroup": "ffdhe6144" 00:19:02.961 } 00:19:02.961 } 00:19:02.961 ]' 00:19:02.961 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.961 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.961 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.961 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:02.961 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.961 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.961 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.961 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.222 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:19:03.222 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:19:03.794 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.794 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:03.794 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.794 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.794 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.794 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.794 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:03.794 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:04.055 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:04.055 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:19:04.055 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:04.055 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:04.055 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:04.055 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.055 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.055 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.055 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.055 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.055 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.055 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.055 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.317 00:19:04.578 07:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.578 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.578 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.578 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.578 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.578 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.578 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.578 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.578 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.578 { 00:19:04.578 "cntlid": 35, 00:19:04.578 "qid": 0, 00:19:04.578 "state": "enabled", 00:19:04.578 "thread": "nvmf_tgt_poll_group_000", 00:19:04.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:04.578 "listen_address": { 00:19:04.578 "trtype": "TCP", 00:19:04.578 "adrfam": "IPv4", 00:19:04.578 "traddr": "10.0.0.2", 00:19:04.578 "trsvcid": "4420" 00:19:04.578 }, 00:19:04.578 "peer_address": { 00:19:04.578 "trtype": "TCP", 00:19:04.578 "adrfam": "IPv4", 00:19:04.578 "traddr": "10.0.0.1", 00:19:04.578 "trsvcid": "45324" 00:19:04.578 }, 00:19:04.578 "auth": { 00:19:04.578 "state": "completed", 00:19:04.578 "digest": "sha256", 00:19:04.579 "dhgroup": "ffdhe6144" 00:19:04.579 } 00:19:04.579 } 00:19:04.579 ]' 00:19:04.579 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:19:04.579 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.839 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.839 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:04.839 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.839 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.839 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.839 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.100 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:19:05.100 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:19:05.671 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.671 07:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:05.671 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.671 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.671 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.671 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.671 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:05.671 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:05.932 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:05.932 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.932 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:05.932 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:05.932 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:05.932 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.932 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.932 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.932 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.932 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.932 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.932 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.932 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.192 00:19:06.453 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.453 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.453 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.453 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.453 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.453 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.453 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.453 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.453 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.453 { 00:19:06.453 "cntlid": 37, 00:19:06.453 "qid": 0, 00:19:06.453 "state": "enabled", 00:19:06.453 "thread": "nvmf_tgt_poll_group_000", 00:19:06.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:06.453 "listen_address": { 00:19:06.453 "trtype": "TCP", 00:19:06.453 "adrfam": "IPv4", 00:19:06.453 "traddr": "10.0.0.2", 00:19:06.453 "trsvcid": "4420" 00:19:06.453 }, 00:19:06.453 "peer_address": { 00:19:06.453 "trtype": "TCP", 00:19:06.453 "adrfam": "IPv4", 00:19:06.453 "traddr": "10.0.0.1", 00:19:06.453 "trsvcid": "45350" 00:19:06.453 }, 00:19:06.453 "auth": { 00:19:06.453 "state": "completed", 00:19:06.453 "digest": "sha256", 00:19:06.453 "dhgroup": "ffdhe6144" 00:19:06.453 } 00:19:06.453 } 00:19:06.453 ]' 00:19:06.453 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.453 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.453 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.714 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:06.714 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.715 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:06.715 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.715 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.715 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:19:06.715 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:19:07.656 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.656 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:07.656 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.656 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.656 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.656 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:07.656 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:07.657 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:07.657 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:07.657 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.657 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:07.657 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:07.657 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:07.657 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.657 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:07.657 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.657 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.918 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.918 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:07.918 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:07.918 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:08.179 00:19:08.179 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.179 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.179 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.439 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.439 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.439 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.439 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.439 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.439 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.439 { 00:19:08.439 "cntlid": 39, 00:19:08.439 "qid": 0, 00:19:08.439 "state": "enabled", 00:19:08.439 "thread": "nvmf_tgt_poll_group_000", 00:19:08.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:08.440 "listen_address": { 00:19:08.440 "trtype": 
"TCP", 00:19:08.440 "adrfam": "IPv4", 00:19:08.440 "traddr": "10.0.0.2", 00:19:08.440 "trsvcid": "4420" 00:19:08.440 }, 00:19:08.440 "peer_address": { 00:19:08.440 "trtype": "TCP", 00:19:08.440 "adrfam": "IPv4", 00:19:08.440 "traddr": "10.0.0.1", 00:19:08.440 "trsvcid": "51108" 00:19:08.440 }, 00:19:08.440 "auth": { 00:19:08.440 "state": "completed", 00:19:08.440 "digest": "sha256", 00:19:08.440 "dhgroup": "ffdhe6144" 00:19:08.440 } 00:19:08.440 } 00:19:08.440 ]' 00:19:08.440 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.440 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.440 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.440 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:08.440 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.440 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.440 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.440 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.701 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:19:08.701 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:19:09.272 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.532 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:09.532 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.532 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.532 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.532 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.532 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.532 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:09.532 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:09.532 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:09.532 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.532 07:28:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:09.532 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:09.532 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:09.532 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.532 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.532 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.532 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.533 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.533 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.533 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.533 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.103 00:19:10.103 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.103 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.103 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.363 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.363 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.363 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.363 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.363 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.363 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.363 { 00:19:10.363 "cntlid": 41, 00:19:10.363 "qid": 0, 00:19:10.363 "state": "enabled", 00:19:10.363 "thread": "nvmf_tgt_poll_group_000", 00:19:10.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:10.363 "listen_address": { 00:19:10.363 "trtype": "TCP", 00:19:10.363 "adrfam": "IPv4", 00:19:10.363 "traddr": "10.0.0.2", 00:19:10.363 "trsvcid": "4420" 00:19:10.363 }, 00:19:10.363 "peer_address": { 00:19:10.363 "trtype": "TCP", 00:19:10.363 "adrfam": "IPv4", 00:19:10.363 "traddr": "10.0.0.1", 00:19:10.363 "trsvcid": "51128" 00:19:10.363 }, 00:19:10.363 "auth": { 00:19:10.363 "state": "completed", 00:19:10.363 "digest": "sha256", 00:19:10.363 "dhgroup": "ffdhe8192" 00:19:10.363 } 00:19:10.363 } 00:19:10.363 ]' 00:19:10.363 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.363 07:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.363 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.363 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:10.363 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.363 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.363 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.363 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.624 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:19:10.624 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.568 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.142 00:19:12.142 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.142 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.142 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.402 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.402 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.402 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.402 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.402 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.402 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.402 { 00:19:12.402 "cntlid": 43, 00:19:12.402 "qid": 0, 00:19:12.402 "state": "enabled", 00:19:12.402 "thread": "nvmf_tgt_poll_group_000", 00:19:12.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:12.402 "listen_address": { 00:19:12.402 "trtype": "TCP", 00:19:12.402 "adrfam": "IPv4", 00:19:12.402 "traddr": "10.0.0.2", 00:19:12.402 "trsvcid": "4420" 00:19:12.402 }, 00:19:12.402 "peer_address": { 00:19:12.402 "trtype": "TCP", 00:19:12.402 "adrfam": "IPv4", 00:19:12.402 "traddr": "10.0.0.1", 00:19:12.402 "trsvcid": "51166" 00:19:12.402 }, 00:19:12.402 "auth": { 00:19:12.402 "state": "completed", 00:19:12.402 "digest": "sha256", 00:19:12.402 "dhgroup": "ffdhe8192" 00:19:12.402 } 00:19:12.402 } 00:19:12.402 ]' 00:19:12.402 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.402 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.403 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.403 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:12.403 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.403 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:12.403 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.403 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.663 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:19:12.663 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.606 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.177 00:19:14.177 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.177 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.177 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.438 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.438 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.438 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.438 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.438 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.438 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.438 { 00:19:14.438 "cntlid": 45, 00:19:14.438 "qid": 0, 00:19:14.438 "state": "enabled", 00:19:14.438 "thread": "nvmf_tgt_poll_group_000", 00:19:14.438 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:14.438 "listen_address": { 00:19:14.438 "trtype": "TCP", 00:19:14.438 "adrfam": "IPv4", 00:19:14.438 "traddr": "10.0.0.2", 00:19:14.438 "trsvcid": "4420" 00:19:14.438 }, 00:19:14.438 "peer_address": { 00:19:14.438 "trtype": "TCP", 00:19:14.438 "adrfam": "IPv4", 00:19:14.438 "traddr": "10.0.0.1", 00:19:14.438 "trsvcid": "51212" 00:19:14.438 }, 00:19:14.438 "auth": { 00:19:14.438 "state": "completed", 00:19:14.438 "digest": "sha256", 00:19:14.438 "dhgroup": "ffdhe8192" 00:19:14.438 } 00:19:14.438 } 00:19:14.438 ]' 00:19:14.438 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.438 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.438 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.438 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:14.438 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.438 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.438 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.438 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.700 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:19:14.700 07:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:19:15.271 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:15.532 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:16.104 00:19:16.104 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:16.104 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.104 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.365 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.365 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.365 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.365 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.365 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.365 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.365 { 00:19:16.365 "cntlid": 47, 00:19:16.365 "qid": 0, 00:19:16.365 "state": "enabled", 00:19:16.365 "thread": "nvmf_tgt_poll_group_000", 00:19:16.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:16.365 "listen_address": { 00:19:16.365 "trtype": "TCP", 00:19:16.365 "adrfam": "IPv4", 00:19:16.366 "traddr": "10.0.0.2", 00:19:16.366 "trsvcid": "4420" 00:19:16.366 }, 00:19:16.366 "peer_address": { 00:19:16.366 "trtype": "TCP", 00:19:16.366 "adrfam": "IPv4", 00:19:16.366 "traddr": "10.0.0.1", 00:19:16.366 "trsvcid": "51248" 00:19:16.366 }, 00:19:16.366 "auth": { 00:19:16.366 "state": "completed", 00:19:16.366 "digest": "sha256", 00:19:16.366 "dhgroup": "ffdhe8192" 00:19:16.366 } 00:19:16.366 } 00:19:16.366 ]' 00:19:16.366 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.366 07:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.366 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.366 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:16.366 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.366 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.366 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.366 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.627 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:19:16.627 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.569 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.829 00:19:17.829 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.829 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.829 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.090 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.090 07:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.090 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.090 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.090 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.090 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.090 { 00:19:18.090 "cntlid": 49, 00:19:18.090 "qid": 0, 00:19:18.090 "state": "enabled", 00:19:18.090 "thread": "nvmf_tgt_poll_group_000", 00:19:18.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:18.090 "listen_address": { 00:19:18.090 "trtype": "TCP", 00:19:18.090 "adrfam": "IPv4", 00:19:18.090 "traddr": "10.0.0.2", 00:19:18.090 "trsvcid": "4420" 00:19:18.090 }, 00:19:18.090 "peer_address": { 00:19:18.090 "trtype": "TCP", 00:19:18.090 "adrfam": "IPv4", 00:19:18.090 "traddr": "10.0.0.1", 00:19:18.090 "trsvcid": "47052" 00:19:18.090 }, 00:19:18.090 "auth": { 00:19:18.090 "state": "completed", 00:19:18.090 "digest": "sha384", 00:19:18.090 "dhgroup": "null" 00:19:18.090 } 00:19:18.090 } 00:19:18.090 ]' 00:19:18.090 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.090 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.090 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.090 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:18.090 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.090 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.090 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.090 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.350 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:19:18.350 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:19:18.921 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.921 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:18.921 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.921 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.181 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.181 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.181 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:19.181 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:19.181 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:19.181 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.181 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:19.181 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:19.181 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:19.181 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.181 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.181 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.181 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.181 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.181 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.181 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.181 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.441 00:19:19.441 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.441 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.441 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.703 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.703 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.703 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.703 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.703 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.703 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.703 { 00:19:19.703 "cntlid": 51, 
00:19:19.703 "qid": 0, 00:19:19.703 "state": "enabled", 00:19:19.703 "thread": "nvmf_tgt_poll_group_000", 00:19:19.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:19.703 "listen_address": { 00:19:19.703 "trtype": "TCP", 00:19:19.703 "adrfam": "IPv4", 00:19:19.703 "traddr": "10.0.0.2", 00:19:19.703 "trsvcid": "4420" 00:19:19.703 }, 00:19:19.703 "peer_address": { 00:19:19.703 "trtype": "TCP", 00:19:19.703 "adrfam": "IPv4", 00:19:19.703 "traddr": "10.0.0.1", 00:19:19.703 "trsvcid": "47072" 00:19:19.703 }, 00:19:19.703 "auth": { 00:19:19.703 "state": "completed", 00:19:19.703 "digest": "sha384", 00:19:19.703 "dhgroup": "null" 00:19:19.703 } 00:19:19.703 } 00:19:19.703 ]' 00:19:19.703 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.703 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:19.703 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.703 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:19.703 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.703 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.703 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.703 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.964 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret 
DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:19:19.964 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.905 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.164 00:19:21.164 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.164 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.164 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.425 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.425 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.425 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.425 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.425 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.425 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.425 { 00:19:21.425 "cntlid": 53, 00:19:21.425 "qid": 0, 00:19:21.425 "state": "enabled", 00:19:21.425 "thread": "nvmf_tgt_poll_group_000", 00:19:21.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:21.425 "listen_address": { 00:19:21.425 "trtype": "TCP", 00:19:21.425 "adrfam": "IPv4", 00:19:21.425 "traddr": "10.0.0.2", 00:19:21.425 "trsvcid": "4420" 00:19:21.425 }, 00:19:21.425 "peer_address": { 00:19:21.425 "trtype": "TCP", 00:19:21.425 "adrfam": "IPv4", 00:19:21.425 "traddr": "10.0.0.1", 00:19:21.425 "trsvcid": "47110" 00:19:21.425 }, 00:19:21.425 "auth": { 00:19:21.425 "state": "completed", 00:19:21.425 "digest": "sha384", 00:19:21.425 "dhgroup": "null" 00:19:21.425 } 00:19:21.425 } 
00:19:21.425 ]' 00:19:21.425 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.425 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:21.425 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.425 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:21.425 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.425 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.425 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.425 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.685 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:19:21.685 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.628 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.628 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.889 00:19:22.889 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.889 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.889 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.150 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.150 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:23.150 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.150 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.150 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.150 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.150 { 00:19:23.150 "cntlid": 55, 00:19:23.150 "qid": 0, 00:19:23.150 "state": "enabled", 00:19:23.150 "thread": "nvmf_tgt_poll_group_000", 00:19:23.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:23.150 "listen_address": { 00:19:23.150 "trtype": "TCP", 00:19:23.150 "adrfam": "IPv4", 00:19:23.150 "traddr": "10.0.0.2", 00:19:23.150 "trsvcid": "4420" 00:19:23.150 }, 00:19:23.150 "peer_address": { 00:19:23.150 "trtype": "TCP", 00:19:23.150 "adrfam": "IPv4", 00:19:23.150 "traddr": "10.0.0.1", 00:19:23.150 "trsvcid": "47150" 00:19:23.150 }, 00:19:23.150 "auth": { 00:19:23.150 "state": "completed", 00:19:23.150 "digest": "sha384", 00:19:23.150 "dhgroup": "null" 00:19:23.150 } 00:19:23.150 } 00:19:23.150 ]' 00:19:23.150 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.150 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:23.150 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.150 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:23.150 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.150 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.150 07:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.150 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.411 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:19:23.411 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.354 07:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.354 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.615 00:19:24.615 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.615 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.615 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.876 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.876 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.876 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.876 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.876 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.876 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.876 { 00:19:24.876 "cntlid": 57, 00:19:24.876 "qid": 0, 00:19:24.876 "state": "enabled", 00:19:24.876 "thread": "nvmf_tgt_poll_group_000", 00:19:24.876 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:24.876 "listen_address": { 00:19:24.876 "trtype": "TCP", 00:19:24.876 "adrfam": "IPv4", 00:19:24.876 "traddr": "10.0.0.2", 00:19:24.876 "trsvcid": "4420" 00:19:24.876 }, 00:19:24.876 "peer_address": { 00:19:24.876 "trtype": "TCP", 00:19:24.876 "adrfam": "IPv4", 00:19:24.876 "traddr": "10.0.0.1", 00:19:24.876 "trsvcid": "47184" 00:19:24.876 }, 00:19:24.876 "auth": { 00:19:24.876 "state": "completed", 00:19:24.876 "digest": "sha384", 00:19:24.876 "dhgroup": "ffdhe2048" 00:19:24.876 } 00:19:24.876 } 00:19:24.876 ]' 00:19:24.876 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.876 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:24.876 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.876 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:24.876 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.876 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.876 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.876 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.137 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret 
DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:19:25.137 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:19:26.078 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.078 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:26.078 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.078 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.078 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.078 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.078 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:26.078 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:26.078 07:29:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:19:26.078 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.078 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:26.078 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:26.078 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:26.078 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.078 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.078 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.078 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.078 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.078 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.078 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.078 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.362 00:19:26.362 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.362 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.362 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.653 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.653 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.653 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.653 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.653 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.653 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.653 { 00:19:26.653 "cntlid": 59, 00:19:26.653 "qid": 0, 00:19:26.653 "state": "enabled", 00:19:26.653 "thread": "nvmf_tgt_poll_group_000", 00:19:26.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:26.653 "listen_address": { 00:19:26.653 "trtype": "TCP", 00:19:26.653 "adrfam": "IPv4", 00:19:26.653 "traddr": "10.0.0.2", 00:19:26.653 "trsvcid": "4420" 00:19:26.653 }, 00:19:26.653 "peer_address": { 00:19:26.653 "trtype": "TCP", 00:19:26.653 "adrfam": "IPv4", 00:19:26.653 "traddr": "10.0.0.1", 00:19:26.653 "trsvcid": "47212" 00:19:26.653 }, 00:19:26.653 "auth": { 00:19:26.653 "state": 
"completed", 00:19:26.653 "digest": "sha384", 00:19:26.653 "dhgroup": "ffdhe2048" 00:19:26.653 } 00:19:26.653 } 00:19:26.653 ]' 00:19:26.653 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.653 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:26.653 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.653 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:26.653 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.653 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.653 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.653 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.924 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:19:26.924 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:19:27.497 07:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.497 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:27.497 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.497 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.497 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.498 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.498 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:27.498 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:27.758 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:19:27.758 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.758 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:27.758 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:27.758 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:27.758 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.758 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.758 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.758 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.758 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.758 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.758 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.758 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.020 00:19:28.020 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.020 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.020 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.282 
07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.282 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.282 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.282 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.282 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.282 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.282 { 00:19:28.282 "cntlid": 61, 00:19:28.282 "qid": 0, 00:19:28.282 "state": "enabled", 00:19:28.282 "thread": "nvmf_tgt_poll_group_000", 00:19:28.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:28.282 "listen_address": { 00:19:28.282 "trtype": "TCP", 00:19:28.282 "adrfam": "IPv4", 00:19:28.282 "traddr": "10.0.0.2", 00:19:28.282 "trsvcid": "4420" 00:19:28.282 }, 00:19:28.282 "peer_address": { 00:19:28.282 "trtype": "TCP", 00:19:28.282 "adrfam": "IPv4", 00:19:28.282 "traddr": "10.0.0.1", 00:19:28.282 "trsvcid": "53968" 00:19:28.282 }, 00:19:28.282 "auth": { 00:19:28.282 "state": "completed", 00:19:28.282 "digest": "sha384", 00:19:28.282 "dhgroup": "ffdhe2048" 00:19:28.282 } 00:19:28.282 } 00:19:28.282 ]' 00:19:28.282 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.282 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:28.282 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.282 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:28.282 07:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.282 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.282 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.282 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.543 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:19:28.543 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:19:29.487 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.487 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:29.488 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.488 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.488 
07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.488 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.488 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:29.488 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:29.488 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:19:29.488 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.488 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:29.488 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:29.488 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:29.488 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.488 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:29.488 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.488 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.488 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.488 07:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:29.488 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.488 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.748 00:19:29.748 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.748 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.748 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.009 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.009 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.009 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.009 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.009 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.009 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.009 { 00:19:30.009 "cntlid": 63, 00:19:30.009 
"qid": 0, 00:19:30.009 "state": "enabled", 00:19:30.009 "thread": "nvmf_tgt_poll_group_000", 00:19:30.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:30.009 "listen_address": { 00:19:30.009 "trtype": "TCP", 00:19:30.009 "adrfam": "IPv4", 00:19:30.009 "traddr": "10.0.0.2", 00:19:30.009 "trsvcid": "4420" 00:19:30.009 }, 00:19:30.009 "peer_address": { 00:19:30.009 "trtype": "TCP", 00:19:30.009 "adrfam": "IPv4", 00:19:30.009 "traddr": "10.0.0.1", 00:19:30.009 "trsvcid": "53998" 00:19:30.009 }, 00:19:30.009 "auth": { 00:19:30.009 "state": "completed", 00:19:30.009 "digest": "sha384", 00:19:30.009 "dhgroup": "ffdhe2048" 00:19:30.009 } 00:19:30.009 } 00:19:30.009 ]' 00:19:30.009 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.009 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:30.009 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.009 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:30.009 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.009 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.009 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.009 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.270 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:19:30.270 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:19:30.840 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.108 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:31.108 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.108 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.108 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.108 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.108 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.108 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:31.108 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:31.108 07:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:19:31.108 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.109 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:31.109 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:31.109 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:31.109 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.109 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.109 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.109 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.109 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.109 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.109 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.109 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.374 00:19:31.374 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.375 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.375 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.635 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.635 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.635 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.635 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.635 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.635 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.635 { 00:19:31.635 "cntlid": 65, 00:19:31.635 "qid": 0, 00:19:31.635 "state": "enabled", 00:19:31.635 "thread": "nvmf_tgt_poll_group_000", 00:19:31.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:31.635 "listen_address": { 00:19:31.635 "trtype": "TCP", 00:19:31.635 "adrfam": "IPv4", 00:19:31.635 "traddr": "10.0.0.2", 00:19:31.635 "trsvcid": "4420" 00:19:31.635 }, 00:19:31.635 "peer_address": { 00:19:31.635 "trtype": "TCP", 00:19:31.635 "adrfam": "IPv4", 00:19:31.635 "traddr": "10.0.0.1", 00:19:31.635 "trsvcid": "54028" 00:19:31.635 }, 00:19:31.635 "auth": { 00:19:31.635 "state": 
"completed", 00:19:31.635 "digest": "sha384", 00:19:31.635 "dhgroup": "ffdhe3072" 00:19:31.635 } 00:19:31.635 } 00:19:31.635 ]' 00:19:31.635 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.635 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.635 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.635 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:31.635 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.635 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.635 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.635 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.895 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:19:31.895 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret 
DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.837 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.097 00:19:33.097 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.097 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.097 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.357 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.357 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.357 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.357 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.357 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.357 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.357 { 00:19:33.357 "cntlid": 67, 00:19:33.357 "qid": 0, 00:19:33.357 "state": "enabled", 00:19:33.357 "thread": "nvmf_tgt_poll_group_000", 00:19:33.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:33.357 "listen_address": { 00:19:33.357 "trtype": "TCP", 00:19:33.357 "adrfam": "IPv4", 00:19:33.357 "traddr": "10.0.0.2", 00:19:33.357 "trsvcid": "4420" 00:19:33.357 }, 00:19:33.357 "peer_address": { 00:19:33.357 "trtype": "TCP", 00:19:33.357 "adrfam": "IPv4", 00:19:33.357 "traddr": "10.0.0.1", 00:19:33.357 "trsvcid": "54062" 00:19:33.357 }, 00:19:33.357 "auth": { 00:19:33.357 "state": "completed", 00:19:33.357 "digest": "sha384", 00:19:33.357 "dhgroup": "ffdhe3072" 00:19:33.357 } 00:19:33.357 } 00:19:33.357 ]' 00:19:33.357 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.357 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.357 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.357 07:29:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:33.357 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.617 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.617 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.617 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.617 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:19:33.617 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:19:34.556 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.556 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:34.556 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:19:34.556 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:34.556 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:34.556 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:34.556 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:34.556 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:34.816 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2
00:19:34.816 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:34.816 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:34.816 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:34.816 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:34.816 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:34.816 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:34.816 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:34.816 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:34.816 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:34.816 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:34.816 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:34.816 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:35.077
00:19:35.077 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:35.077 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:35.077 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:35.077 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:35.077 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:35.077 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:35.077 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:35.077 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:35.077 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:35.077 {
00:19:35.077 "cntlid": 69,
00:19:35.077 "qid": 0,
00:19:35.077 "state": "enabled",
00:19:35.077 "thread": "nvmf_tgt_poll_group_000",
00:19:35.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:19:35.077 "listen_address": {
00:19:35.077 "trtype": "TCP",
00:19:35.077 "adrfam": "IPv4",
00:19:35.077 "traddr": "10.0.0.2",
00:19:35.077 "trsvcid": "4420"
00:19:35.077 },
00:19:35.077 "peer_address": {
00:19:35.077 "trtype": "TCP",
00:19:35.077 "adrfam": "IPv4",
00:19:35.077 "traddr": "10.0.0.1",
00:19:35.077 "trsvcid": "54076"
00:19:35.077 },
00:19:35.077 "auth": {
00:19:35.077 "state": "completed",
00:19:35.077 "digest": "sha384",
00:19:35.077 "dhgroup": "ffdhe3072"
00:19:35.077 }
00:19:35.077 }
00:19:35.077 ]'
00:19:35.077 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:35.340 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:35.340 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:35.340 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:35.340 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:35.340 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:35.340 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:35.340 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:35.599 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC:
00:19:35.599 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC:
00:19:36.170 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:36.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:36.170 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:19:36.170 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:36.170 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:36.170 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:36.170 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:36.170 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:36.171 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:36.430 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:19:36.430 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:36.430 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:36.430 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:36.430 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:36.430 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:36.430 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:19:36.430 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:36.430 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:36.430 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:36.430 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:36.431 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:36.431 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:36.692
00:19:36.692 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:36.692 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:36.692 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:36.953 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:36.953 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:36.953 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:36.953 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:36.953 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:36.953 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:36.953 {
00:19:36.953 "cntlid": 71,
00:19:36.953 "qid": 0,
00:19:36.953 "state": "enabled",
00:19:36.953 "thread": "nvmf_tgt_poll_group_000",
00:19:36.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:19:36.953 "listen_address": {
00:19:36.953 "trtype": "TCP",
00:19:36.953 "adrfam": "IPv4",
00:19:36.953 "traddr": "10.0.0.2",
00:19:36.953 "trsvcid": "4420"
00:19:36.953 },
00:19:36.953 "peer_address": {
00:19:36.953 "trtype": "TCP",
00:19:36.953 "adrfam": "IPv4",
00:19:36.953 "traddr": "10.0.0.1",
00:19:36.953 "trsvcid": "54116"
00:19:36.953 },
00:19:36.953 "auth": {
00:19:36.953 "state": "completed",
00:19:36.953 "digest": "sha384",
00:19:36.953 "dhgroup": "ffdhe3072"
00:19:36.953 }
00:19:36.953 }
00:19:36.953 ]'
00:19:36.953 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:36.953 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:36.953 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:36.953 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:36.953 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:36.953 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:36.953 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:36.953 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:37.215 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=:
00:19:37.215 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=:
00:19:38.156 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:38.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:38.156 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:19:38.156 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:38.156 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:38.156 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:38.156 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:38.156 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:38.156 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:38.156 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:38.156 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0
00:19:38.156 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:38.156 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:38.156 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:38.156 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:38.156 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:38.156 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:38.156 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:38.156 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:38.156 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:38.156 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:38.156 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:38.156 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:38.416
00:19:38.417 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:38.417 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:38.417 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:38.677 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:38.677 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:38.677 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:38.677 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:38.677 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:38.677 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:38.677 {
00:19:38.677 "cntlid": 73,
00:19:38.677 "qid": 0,
00:19:38.677 "state": "enabled",
00:19:38.677 "thread": "nvmf_tgt_poll_group_000",
00:19:38.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:19:38.677 "listen_address": {
00:19:38.677 "trtype": "TCP",
00:19:38.677 "adrfam": "IPv4",
00:19:38.677 "traddr": "10.0.0.2",
00:19:38.677 "trsvcid": "4420"
00:19:38.677 },
00:19:38.677 "peer_address": {
00:19:38.677 "trtype": "TCP",
00:19:38.677 "adrfam": "IPv4",
00:19:38.677 "traddr": "10.0.0.1",
00:19:38.677 "trsvcid": "43694"
00:19:38.677 },
00:19:38.677 "auth": {
00:19:38.677 "state": "completed",
00:19:38.677 "digest": "sha384",
00:19:38.677 "dhgroup": "ffdhe4096"
00:19:38.677 }
00:19:38.677 }
00:19:38.677 ]'
00:19:38.677 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:38.677 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:38.677 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:38.677 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:38.677 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:38.938 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:38.938 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:38.938 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:38.938 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=:
00:19:38.938 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=:
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:39.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:39.880 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:40.140
00:19:40.140 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:40.140 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:40.140 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:40.401 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:40.401 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:40.401 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:40.401 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:40.401 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:40.401 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:40.401 {
00:19:40.401 "cntlid": 75,
00:19:40.401 "qid": 0,
00:19:40.401 "state": "enabled",
00:19:40.401 "thread": "nvmf_tgt_poll_group_000",
00:19:40.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:19:40.401 "listen_address": {
00:19:40.401 "trtype": "TCP",
00:19:40.401 "adrfam": "IPv4",
00:19:40.401 "traddr": "10.0.0.2",
00:19:40.401 "trsvcid": "4420"
00:19:40.401 },
00:19:40.401 "peer_address": {
00:19:40.401 "trtype": "TCP",
00:19:40.401 "adrfam": "IPv4",
00:19:40.401 "traddr": "10.0.0.1",
00:19:40.401 "trsvcid": "43716"
00:19:40.401 },
00:19:40.401 "auth": {
00:19:40.401 "state": "completed",
00:19:40.401 "digest": "sha384",
00:19:40.401 "dhgroup": "ffdhe4096"
00:19:40.401 }
00:19:40.401 }
00:19:40.401 ]'
00:19:40.401 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:40.401 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:40.401 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:40.401 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:40.401 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:40.662 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:40.662 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:40.662 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:40.662 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==:
00:19:40.662 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==:
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:41.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:41.603 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:41.865
00:19:42.126 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:42.127 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:42.127 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:42.127 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:42.127 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:42.127 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:42.127 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:42.127 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:42.127 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:42.127 {
00:19:42.127 "cntlid": 77,
00:19:42.127 "qid": 0,
00:19:42.127 "state": "enabled",
00:19:42.127 "thread": "nvmf_tgt_poll_group_000",
00:19:42.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:19:42.127 "listen_address": {
00:19:42.127 "trtype": "TCP",
00:19:42.127 "adrfam": "IPv4",
00:19:42.127 "traddr": "10.0.0.2",
00:19:42.127 "trsvcid": "4420"
00:19:42.127 },
00:19:42.127 "peer_address": {
00:19:42.127 "trtype": "TCP",
00:19:42.127 "adrfam": "IPv4",
00:19:42.127 "traddr": "10.0.0.1",
00:19:42.127 "trsvcid": "43754"
00:19:42.127 },
00:19:42.127 "auth": {
00:19:42.127 "state": "completed",
00:19:42.127 "digest": "sha384",
00:19:42.127 "dhgroup": "ffdhe4096"
00:19:42.127 }
00:19:42.127 }
00:19:42.127 ]'
00:19:42.127 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:42.127 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:42.127 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:42.388 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:42.388 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:42.388 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:42.388 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:42.388 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:42.648 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC:
00:19:42.648 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC:
00:19:43.218 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:43.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:43.218 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:19:43.218 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:43.219 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:43.219 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:43.219 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:43.219 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:43.219 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:43.479 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:19:43.479 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:43.479 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:43.479 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:43.479 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:43.479 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:43.479 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:19:43.479 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:43.479 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:43.479 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:43.479 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:43.479 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:43.479 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:43.739
00:19:43.739 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:43.739 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:43.739 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 --
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.000 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.000 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.000 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.000 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.000 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.000 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.000 { 00:19:44.000 "cntlid": 79, 00:19:44.000 "qid": 0, 00:19:44.000 "state": "enabled", 00:19:44.000 "thread": "nvmf_tgt_poll_group_000", 00:19:44.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:44.000 "listen_address": { 00:19:44.000 "trtype": "TCP", 00:19:44.000 "adrfam": "IPv4", 00:19:44.000 "traddr": "10.0.0.2", 00:19:44.000 "trsvcid": "4420" 00:19:44.000 }, 00:19:44.000 "peer_address": { 00:19:44.000 "trtype": "TCP", 00:19:44.000 "adrfam": "IPv4", 00:19:44.000 "traddr": "10.0.0.1", 00:19:44.000 "trsvcid": "43770" 00:19:44.000 }, 00:19:44.000 "auth": { 00:19:44.000 "state": "completed", 00:19:44.000 "digest": "sha384", 00:19:44.000 "dhgroup": "ffdhe4096" 00:19:44.000 } 00:19:44.000 } 00:19:44.000 ]' 00:19:44.000 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.000 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.000 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.000 07:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:44.000 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.000 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.000 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.000 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.261 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:19:44.261 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:19:45.202 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.202 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:45.202 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.202 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
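(Aside, not part of the original test output.) Each iteration above passes `--dhchap-secret` / `--dhchap-ctrl-secret` values to `nvme connect` and to the SPDK RPCs. A small sketch of how such a secret is put together, under the assumption that these strings follow the NVMe DH-HMAC-CHAP interchange format (TP 8006, as produced by `nvme gen-dhchap-key`): `DHHC-1:<hash id>:<base64(key material || 4-byte CRC)>:`. The helper name `parse_dhchap_secret` is illustrative, not from the log.

```python
import base64

def parse_dhchap_secret(secret: str):
    """Split a DHHC-1 secret into hash-id field, key material, and CRC tail.

    Assumed layout (TP 8006 interchange format): "DHHC-1:<id>:<base64>:"
    where the base64 payload is the key followed by a 4-byte CRC32.
    The <id> field encodes the transformation hash (e.g. 02 ~ SHA-384,
    which matches the 48-byte key below) -- treat that mapping as hedged.
    """
    prefix, hash_id, b64, trailer = secret.split(":")
    assert prefix == "DHHC-1" and trailer == ""
    raw = base64.b64decode(b64)
    return hash_id, raw[:-4], raw[-4:]

# One of the secrets exercised in the log above.
secret = ("DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4"
          "YmRlNDA1YTk0loyv6Q==:")
hash_id, key, crc = parse_dhchap_secret(secret)
print(hash_id, len(key), len(crc))  # hash-id field "02", 48-byte key, 4-byte CRC
```

This mirrors why the test cycles distinct `DHHC-1:00/01/02/03:` secrets per key index: each hash-id class carries a different key length, while the surrounding RPC sequence (`bdev_nvme_set_options` → `nvmf_subsystem_add_host` → `bdev_nvme_attach_controller` → `nvmf_subsystem_get_qpairs`) stays identical across digests and DH groups.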
00:19:45.202 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.202 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:45.202 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.202 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:45.202 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:45.202 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:19:45.202 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.202 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:45.202 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:45.202 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:45.202 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.202 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.203 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.203 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:45.203 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.203 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.203 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.203 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.464 00:19:45.725 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.725 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.725 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.725 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.725 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.725 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.725 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.725 07:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.725 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.725 { 00:19:45.725 "cntlid": 81, 00:19:45.725 "qid": 0, 00:19:45.725 "state": "enabled", 00:19:45.725 "thread": "nvmf_tgt_poll_group_000", 00:19:45.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:45.725 "listen_address": { 00:19:45.725 "trtype": "TCP", 00:19:45.725 "adrfam": "IPv4", 00:19:45.725 "traddr": "10.0.0.2", 00:19:45.725 "trsvcid": "4420" 00:19:45.725 }, 00:19:45.725 "peer_address": { 00:19:45.725 "trtype": "TCP", 00:19:45.725 "adrfam": "IPv4", 00:19:45.725 "traddr": "10.0.0.1", 00:19:45.725 "trsvcid": "43806" 00:19:45.725 }, 00:19:45.725 "auth": { 00:19:45.725 "state": "completed", 00:19:45.725 "digest": "sha384", 00:19:45.725 "dhgroup": "ffdhe6144" 00:19:45.725 } 00:19:45.725 } 00:19:45.725 ]' 00:19:45.725 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.725 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.725 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.985 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:45.985 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.985 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.985 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.985 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.985 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:19:45.985 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:19:46.926 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.926 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:46.926 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.926 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.926 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.926 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.926 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:46.926 07:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:47.187 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:19:47.187 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.187 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:47.187 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:47.187 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:47.187 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.187 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.187 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.187 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.187 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.187 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.187 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.187 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.448 00:19:47.448 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.448 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.448 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.709 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.709 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.709 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.709 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.709 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.709 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.709 { 00:19:47.709 "cntlid": 83, 00:19:47.709 "qid": 0, 00:19:47.709 "state": "enabled", 00:19:47.709 "thread": "nvmf_tgt_poll_group_000", 00:19:47.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:47.709 "listen_address": { 00:19:47.709 "trtype": "TCP", 00:19:47.709 "adrfam": "IPv4", 00:19:47.709 "traddr": "10.0.0.2", 00:19:47.709 
"trsvcid": "4420" 00:19:47.709 }, 00:19:47.709 "peer_address": { 00:19:47.709 "trtype": "TCP", 00:19:47.709 "adrfam": "IPv4", 00:19:47.709 "traddr": "10.0.0.1", 00:19:47.709 "trsvcid": "43822" 00:19:47.709 }, 00:19:47.709 "auth": { 00:19:47.709 "state": "completed", 00:19:47.709 "digest": "sha384", 00:19:47.709 "dhgroup": "ffdhe6144" 00:19:47.709 } 00:19:47.709 } 00:19:47.709 ]' 00:19:47.709 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.709 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:47.709 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.709 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:47.709 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.709 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.709 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.709 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.969 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:19:47.969 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:19:48.542 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.542 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:48.542 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.542 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.802 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.802 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.802 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:48.802 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:48.802 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:19:48.802 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.802 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:48.802 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:48.802 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:48.802 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.802 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.802 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.802 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.802 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.803 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.803 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.803 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.375 00:19:49.375 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.375 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:49.375 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.375 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.375 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.375 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.375 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.375 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.375 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.375 { 00:19:49.375 "cntlid": 85, 00:19:49.375 "qid": 0, 00:19:49.375 "state": "enabled", 00:19:49.375 "thread": "nvmf_tgt_poll_group_000", 00:19:49.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:49.375 "listen_address": { 00:19:49.375 "trtype": "TCP", 00:19:49.375 "adrfam": "IPv4", 00:19:49.375 "traddr": "10.0.0.2", 00:19:49.375 "trsvcid": "4420" 00:19:49.375 }, 00:19:49.375 "peer_address": { 00:19:49.375 "trtype": "TCP", 00:19:49.375 "adrfam": "IPv4", 00:19:49.375 "traddr": "10.0.0.1", 00:19:49.375 "trsvcid": "42062" 00:19:49.375 }, 00:19:49.375 "auth": { 00:19:49.375 "state": "completed", 00:19:49.375 "digest": "sha384", 00:19:49.375 "dhgroup": "ffdhe6144" 00:19:49.375 } 00:19:49.375 } 00:19:49.375 ]' 00:19:49.375 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.375 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.375 07:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.637 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:49.637 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.637 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.637 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.637 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.637 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:19:49.637 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:19:50.579 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.579 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:50.579 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.579 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.579 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.579 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.579 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:50.579 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:50.579 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:19:50.579 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.579 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:50.579 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:50.579 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:50.579 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.579 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:50.579 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.579 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.840 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.840 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:50.840 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.840 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.102 00:19:51.102 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.102 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.102 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.363 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.363 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.363 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.363 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:51.363 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.363 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.363 { 00:19:51.363 "cntlid": 87, 00:19:51.363 "qid": 0, 00:19:51.363 "state": "enabled", 00:19:51.363 "thread": "nvmf_tgt_poll_group_000", 00:19:51.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:51.363 "listen_address": { 00:19:51.363 "trtype": "TCP", 00:19:51.363 "adrfam": "IPv4", 00:19:51.363 "traddr": "10.0.0.2", 00:19:51.363 "trsvcid": "4420" 00:19:51.363 }, 00:19:51.363 "peer_address": { 00:19:51.363 "trtype": "TCP", 00:19:51.363 "adrfam": "IPv4", 00:19:51.363 "traddr": "10.0.0.1", 00:19:51.363 "trsvcid": "42092" 00:19:51.363 }, 00:19:51.363 "auth": { 00:19:51.363 "state": "completed", 00:19:51.363 "digest": "sha384", 00:19:51.363 "dhgroup": "ffdhe6144" 00:19:51.363 } 00:19:51.363 } 00:19:51.363 ]' 00:19:51.363 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.363 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.363 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.363 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:51.363 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.363 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.363 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.363 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.625 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:19:51.625 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:19:52.196 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.196 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:52.196 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.196 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.196 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.197 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.197 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.197 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:52.197 07:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:52.457 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:19:52.457 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.457 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:52.457 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:52.457 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:52.457 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.457 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.457 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.457 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.457 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.457 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.457 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.457 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.028 00:19:53.028 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.028 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.028 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.028 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.028 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.028 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.028 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.028 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.028 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.028 { 00:19:53.028 "cntlid": 89, 00:19:53.028 "qid": 0, 00:19:53.028 "state": "enabled", 00:19:53.028 "thread": "nvmf_tgt_poll_group_000", 00:19:53.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:53.028 "listen_address": { 00:19:53.028 "trtype": "TCP", 00:19:53.028 "adrfam": "IPv4", 00:19:53.028 "traddr": "10.0.0.2", 00:19:53.028 
"trsvcid": "4420" 00:19:53.028 }, 00:19:53.028 "peer_address": { 00:19:53.028 "trtype": "TCP", 00:19:53.028 "adrfam": "IPv4", 00:19:53.028 "traddr": "10.0.0.1", 00:19:53.028 "trsvcid": "42126" 00:19:53.028 }, 00:19:53.028 "auth": { 00:19:53.028 "state": "completed", 00:19:53.028 "digest": "sha384", 00:19:53.028 "dhgroup": "ffdhe8192" 00:19:53.028 } 00:19:53.028 } 00:19:53.028 ]' 00:19:53.028 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.289 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.289 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.289 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:53.289 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.289 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.289 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.289 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.550 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:19:53.550 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:19:54.133 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.133 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:54.133 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.133 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.133 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.133 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.133 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:54.133 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:54.394 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:19:54.394 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.394 07:29:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:54.394 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:54.394 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:54.394 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.394 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.394 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.394 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.394 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.394 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.394 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.394 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.964 00:19:54.964 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.964 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.964 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.223 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.223 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.223 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.223 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.223 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.223 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.223 { 00:19:55.223 "cntlid": 91, 00:19:55.223 "qid": 0, 00:19:55.223 "state": "enabled", 00:19:55.223 "thread": "nvmf_tgt_poll_group_000", 00:19:55.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:55.223 "listen_address": { 00:19:55.223 "trtype": "TCP", 00:19:55.223 "adrfam": "IPv4", 00:19:55.223 "traddr": "10.0.0.2", 00:19:55.223 "trsvcid": "4420" 00:19:55.223 }, 00:19:55.223 "peer_address": { 00:19:55.223 "trtype": "TCP", 00:19:55.223 "adrfam": "IPv4", 00:19:55.223 "traddr": "10.0.0.1", 00:19:55.223 "trsvcid": "42152" 00:19:55.223 }, 00:19:55.223 "auth": { 00:19:55.223 "state": "completed", 00:19:55.223 "digest": "sha384", 00:19:55.223 "dhgroup": "ffdhe8192" 00:19:55.223 } 00:19:55.223 } 00:19:55.223 ]' 00:19:55.223 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.223 07:29:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.223 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.223 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:55.223 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.224 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.224 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.224 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.484 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:19:55.484 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:19:56.053 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.053 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:56.053 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.053 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.053 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.053 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.053 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:56.053 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:56.314 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:19:56.314 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.314 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:56.314 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:56.314 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:56.314 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.314 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:56.314 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.314 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.314 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.314 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.314 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.314 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.003 00:19:57.003 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.003 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.003 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.003 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.003 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.003 07:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.003 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.003 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.003 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.003 { 00:19:57.003 "cntlid": 93, 00:19:57.003 "qid": 0, 00:19:57.003 "state": "enabled", 00:19:57.003 "thread": "nvmf_tgt_poll_group_000", 00:19:57.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:57.003 "listen_address": { 00:19:57.003 "trtype": "TCP", 00:19:57.003 "adrfam": "IPv4", 00:19:57.003 "traddr": "10.0.0.2", 00:19:57.003 "trsvcid": "4420" 00:19:57.003 }, 00:19:57.003 "peer_address": { 00:19:57.003 "trtype": "TCP", 00:19:57.003 "adrfam": "IPv4", 00:19:57.003 "traddr": "10.0.0.1", 00:19:57.003 "trsvcid": "42176" 00:19:57.003 }, 00:19:57.003 "auth": { 00:19:57.003 "state": "completed", 00:19:57.003 "digest": "sha384", 00:19:57.003 "dhgroup": "ffdhe8192" 00:19:57.003 } 00:19:57.003 } 00:19:57.003 ]' 00:19:57.003 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.003 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.003 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.003 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:57.003 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.326 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.326 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.326 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.326 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:19:57.326 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.295 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.868 00:19:58.868 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.868 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.868 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.129 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.129 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.129 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.129 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.129 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.129 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.129 { 00:19:59.129 "cntlid": 95, 00:19:59.129 "qid": 0, 00:19:59.129 "state": "enabled", 00:19:59.129 "thread": "nvmf_tgt_poll_group_000", 00:19:59.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:59.129 "listen_address": { 00:19:59.129 "trtype": "TCP", 00:19:59.129 "adrfam": 
"IPv4", 00:19:59.129 "traddr": "10.0.0.2", 00:19:59.129 "trsvcid": "4420" 00:19:59.129 }, 00:19:59.129 "peer_address": { 00:19:59.129 "trtype": "TCP", 00:19:59.129 "adrfam": "IPv4", 00:19:59.129 "traddr": "10.0.0.1", 00:19:59.129 "trsvcid": "41848" 00:19:59.129 }, 00:19:59.129 "auth": { 00:19:59.129 "state": "completed", 00:19:59.129 "digest": "sha384", 00:19:59.129 "dhgroup": "ffdhe8192" 00:19:59.129 } 00:19:59.129 } 00:19:59.129 ]' 00:19:59.129 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.129 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:59.129 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.129 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:59.129 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.129 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.129 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.129 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.390 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:19:59.390 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:19:59.961 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.961 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:59.961 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.961 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.961 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.961 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:59.961 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.961 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.961 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:59.962 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:00.222 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:00.222 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.222 
07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:00.222 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:00.222 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:00.222 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.222 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.222 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.222 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.222 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.222 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.222 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.222 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.482 00:20:00.482 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.482 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.482 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.741 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.741 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.741 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.741 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.741 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.741 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.741 { 00:20:00.741 "cntlid": 97, 00:20:00.741 "qid": 0, 00:20:00.741 "state": "enabled", 00:20:00.741 "thread": "nvmf_tgt_poll_group_000", 00:20:00.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:00.741 "listen_address": { 00:20:00.741 "trtype": "TCP", 00:20:00.741 "adrfam": "IPv4", 00:20:00.741 "traddr": "10.0.0.2", 00:20:00.741 "trsvcid": "4420" 00:20:00.741 }, 00:20:00.741 "peer_address": { 00:20:00.741 "trtype": "TCP", 00:20:00.741 "adrfam": "IPv4", 00:20:00.741 "traddr": "10.0.0.1", 00:20:00.741 "trsvcid": "41864" 00:20:00.741 }, 00:20:00.741 "auth": { 00:20:00.741 "state": "completed", 00:20:00.741 "digest": "sha512", 00:20:00.741 "dhgroup": "null" 00:20:00.741 } 00:20:00.741 } 00:20:00.741 ]' 00:20:00.741 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.741 07:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.741 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.741 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:00.741 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.741 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.741 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.741 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.001 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:20:01.002 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.943 07:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.943 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.203 00:20:02.203 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.203 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.203 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.464 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.464 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.464 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.464 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.464 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.464 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.464 { 00:20:02.464 "cntlid": 99, 00:20:02.464 "qid": 0, 00:20:02.464 "state": "enabled", 00:20:02.464 "thread": "nvmf_tgt_poll_group_000", 00:20:02.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:02.464 "listen_address": { 00:20:02.464 "trtype": "TCP", 00:20:02.464 "adrfam": "IPv4", 00:20:02.464 "traddr": "10.0.0.2", 00:20:02.464 "trsvcid": "4420" 00:20:02.464 }, 00:20:02.464 "peer_address": { 00:20:02.464 "trtype": "TCP", 00:20:02.464 "adrfam": "IPv4", 00:20:02.464 "traddr": "10.0.0.1", 00:20:02.464 "trsvcid": "41884" 00:20:02.464 }, 00:20:02.464 "auth": { 00:20:02.464 "state": "completed", 00:20:02.464 "digest": "sha512", 00:20:02.464 "dhgroup": "null" 00:20:02.464 } 00:20:02.464 } 00:20:02.464 ]' 00:20:02.464 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.464 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:02.464 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.464 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:02.464 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.464 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.464 
07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.464 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.725 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:20:02.725 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:20:03.296 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.557 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:03.557 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.557 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.557 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.557 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.557 
07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:03.557 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:03.557 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:03.557 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.557 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:03.557 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:03.557 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:03.557 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.557 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.557 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.557 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.557 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.557 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.557 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.557 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.817 00:20:03.817 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.817 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.817 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.078 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.078 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.078 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.078 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.078 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.078 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.078 { 00:20:04.078 "cntlid": 101, 00:20:04.078 "qid": 0, 00:20:04.078 "state": "enabled", 00:20:04.078 "thread": "nvmf_tgt_poll_group_000", 00:20:04.078 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:04.078 "listen_address": { 00:20:04.078 "trtype": "TCP", 00:20:04.078 "adrfam": "IPv4", 00:20:04.078 "traddr": "10.0.0.2", 00:20:04.078 "trsvcid": "4420" 00:20:04.078 }, 00:20:04.078 "peer_address": { 00:20:04.078 "trtype": "TCP", 00:20:04.078 "adrfam": "IPv4", 00:20:04.078 "traddr": "10.0.0.1", 00:20:04.078 "trsvcid": "41896" 00:20:04.078 }, 00:20:04.078 "auth": { 00:20:04.078 "state": "completed", 00:20:04.078 "digest": "sha512", 00:20:04.078 "dhgroup": "null" 00:20:04.078 } 00:20:04.078 } 00:20:04.078 ]' 00:20:04.078 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.078 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:04.078 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.078 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:04.078 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.078 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.078 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.078 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.338 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:20:04.338 07:29:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.281 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.542 00:20:05.542 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.542 
07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.542 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.803 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.803 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.803 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.803 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.803 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.803 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.803 { 00:20:05.803 "cntlid": 103, 00:20:05.803 "qid": 0, 00:20:05.803 "state": "enabled", 00:20:05.803 "thread": "nvmf_tgt_poll_group_000", 00:20:05.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:05.803 "listen_address": { 00:20:05.803 "trtype": "TCP", 00:20:05.803 "adrfam": "IPv4", 00:20:05.803 "traddr": "10.0.0.2", 00:20:05.803 "trsvcid": "4420" 00:20:05.803 }, 00:20:05.803 "peer_address": { 00:20:05.803 "trtype": "TCP", 00:20:05.803 "adrfam": "IPv4", 00:20:05.803 "traddr": "10.0.0.1", 00:20:05.803 "trsvcid": "41914" 00:20:05.803 }, 00:20:05.803 "auth": { 00:20:05.803 "state": "completed", 00:20:05.803 "digest": "sha512", 00:20:05.803 "dhgroup": "null" 00:20:05.803 } 00:20:05.803 } 00:20:05.803 ]' 00:20:05.803 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.803 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:20:05.803 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.803 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:05.803 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.803 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.803 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.803 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.064 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:20:06.064 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:20:07.004 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.004 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:07.004 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.004 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.004 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.004 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.004 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.004 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:07.004 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:07.004 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:07.004 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.004 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:07.004 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:07.004 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:07.004 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.004 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.004 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.004 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.004 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.004 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.004 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.004 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.265 00:20:07.265 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.265 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.265 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.548 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.548 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.548 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:07.548 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.548 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.548 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.548 { 00:20:07.548 "cntlid": 105, 00:20:07.548 "qid": 0, 00:20:07.548 "state": "enabled", 00:20:07.548 "thread": "nvmf_tgt_poll_group_000", 00:20:07.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:07.548 "listen_address": { 00:20:07.548 "trtype": "TCP", 00:20:07.548 "adrfam": "IPv4", 00:20:07.548 "traddr": "10.0.0.2", 00:20:07.548 "trsvcid": "4420" 00:20:07.548 }, 00:20:07.548 "peer_address": { 00:20:07.548 "trtype": "TCP", 00:20:07.548 "adrfam": "IPv4", 00:20:07.548 "traddr": "10.0.0.1", 00:20:07.548 "trsvcid": "41950" 00:20:07.548 }, 00:20:07.548 "auth": { 00:20:07.548 "state": "completed", 00:20:07.548 "digest": "sha512", 00:20:07.548 "dhgroup": "ffdhe2048" 00:20:07.548 } 00:20:07.548 } 00:20:07.548 ]' 00:20:07.548 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.548 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:07.548 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.548 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:07.548 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.548 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.548 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.548 07:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.809 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:20:07.809 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:20:08.378 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.379 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:08.379 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.379 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.640 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.640 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.640 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:08.640 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:08.640 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:08.640 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.640 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:08.640 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:08.640 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:08.640 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.640 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.640 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.640 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.640 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.641 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.641 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.641 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.902 00:20:08.902 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.902 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.902 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.162 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.162 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.162 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.162 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.162 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.162 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.162 { 00:20:09.162 "cntlid": 107, 00:20:09.162 "qid": 0, 00:20:09.162 "state": "enabled", 00:20:09.162 "thread": "nvmf_tgt_poll_group_000", 00:20:09.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:09.162 
"listen_address": { 00:20:09.162 "trtype": "TCP", 00:20:09.162 "adrfam": "IPv4", 00:20:09.163 "traddr": "10.0.0.2", 00:20:09.163 "trsvcid": "4420" 00:20:09.163 }, 00:20:09.163 "peer_address": { 00:20:09.163 "trtype": "TCP", 00:20:09.163 "adrfam": "IPv4", 00:20:09.163 "traddr": "10.0.0.1", 00:20:09.163 "trsvcid": "45502" 00:20:09.163 }, 00:20:09.163 "auth": { 00:20:09.163 "state": "completed", 00:20:09.163 "digest": "sha512", 00:20:09.163 "dhgroup": "ffdhe2048" 00:20:09.163 } 00:20:09.163 } 00:20:09.163 ]' 00:20:09.163 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.163 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:09.163 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.163 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:09.163 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.163 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.163 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.163 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.423 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:20:09.423 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:20:10.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:10.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.264 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.264 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.264 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:10.264 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:10.264 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:10.264 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.264 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:20:10.264 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:10.264 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:10.264 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.264 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.264 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.264 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.264 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.264 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.264 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.264 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.525 00:20:10.525 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:10.525 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.525 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.785 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.786 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.786 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.786 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.786 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.786 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.786 { 00:20:10.786 "cntlid": 109, 00:20:10.786 "qid": 0, 00:20:10.786 "state": "enabled", 00:20:10.786 "thread": "nvmf_tgt_poll_group_000", 00:20:10.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:10.786 "listen_address": { 00:20:10.786 "trtype": "TCP", 00:20:10.786 "adrfam": "IPv4", 00:20:10.786 "traddr": "10.0.0.2", 00:20:10.786 "trsvcid": "4420" 00:20:10.786 }, 00:20:10.786 "peer_address": { 00:20:10.786 "trtype": "TCP", 00:20:10.786 "adrfam": "IPv4", 00:20:10.786 "traddr": "10.0.0.1", 00:20:10.786 "trsvcid": "45530" 00:20:10.786 }, 00:20:10.786 "auth": { 00:20:10.786 "state": "completed", 00:20:10.786 "digest": "sha512", 00:20:10.786 "dhgroup": "ffdhe2048" 00:20:10.786 } 00:20:10.786 } 00:20:10.786 ]' 00:20:10.786 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.786 07:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:10.786 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.786 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:10.786 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.786 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.786 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.786 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.046 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:20:11.046 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:20:11.987 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.987 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:11.987 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.987 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.987 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.987 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.987 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:11.987 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:11.987 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:11.987 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.987 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:11.987 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:11.987 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:11.987 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.987 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:11.987 07:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.987 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.987 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.987 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:11.987 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.987 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.249 00:20:12.249 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.249 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.249 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.510 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.510 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.510 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.510 07:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.510 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.510 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.510 { 00:20:12.510 "cntlid": 111, 00:20:12.510 "qid": 0, 00:20:12.510 "state": "enabled", 00:20:12.510 "thread": "nvmf_tgt_poll_group_000", 00:20:12.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:12.510 "listen_address": { 00:20:12.510 "trtype": "TCP", 00:20:12.510 "adrfam": "IPv4", 00:20:12.510 "traddr": "10.0.0.2", 00:20:12.510 "trsvcid": "4420" 00:20:12.510 }, 00:20:12.510 "peer_address": { 00:20:12.510 "trtype": "TCP", 00:20:12.510 "adrfam": "IPv4", 00:20:12.510 "traddr": "10.0.0.1", 00:20:12.510 "trsvcid": "45552" 00:20:12.510 }, 00:20:12.510 "auth": { 00:20:12.510 "state": "completed", 00:20:12.510 "digest": "sha512", 00:20:12.510 "dhgroup": "ffdhe2048" 00:20:12.510 } 00:20:12.510 } 00:20:12.510 ]' 00:20:12.510 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.510 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:12.510 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.510 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:12.510 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.510 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.510 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.510 07:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.771 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:20:12.771 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:20:13.713 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.713 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:13.713 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.713 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.713 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.713 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.713 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.713 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:20:13.713 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:13.714 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:13.714 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.714 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:13.714 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:13.714 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:13.714 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.714 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.714 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.714 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.714 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.714 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.714 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.714 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.974 00:20:13.974 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.974 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.974 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.235 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.235 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.235 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.235 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.235 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.235 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.235 { 00:20:14.235 "cntlid": 113, 00:20:14.235 "qid": 0, 00:20:14.235 "state": "enabled", 00:20:14.235 "thread": "nvmf_tgt_poll_group_000", 00:20:14.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:14.235 "listen_address": { 
00:20:14.235 "trtype": "TCP", 00:20:14.235 "adrfam": "IPv4", 00:20:14.235 "traddr": "10.0.0.2", 00:20:14.235 "trsvcid": "4420" 00:20:14.235 }, 00:20:14.235 "peer_address": { 00:20:14.235 "trtype": "TCP", 00:20:14.235 "adrfam": "IPv4", 00:20:14.235 "traddr": "10.0.0.1", 00:20:14.235 "trsvcid": "45582" 00:20:14.235 }, 00:20:14.235 "auth": { 00:20:14.235 "state": "completed", 00:20:14.235 "digest": "sha512", 00:20:14.235 "dhgroup": "ffdhe3072" 00:20:14.235 } 00:20:14.235 } 00:20:14.235 ]' 00:20:14.235 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.235 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.235 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.235 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:14.235 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.235 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.235 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.235 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.496 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:20:14.496 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:20:15.067 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.067 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:15.067 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.067 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.067 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.067 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.067 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:15.067 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:15.326 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:15.326 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:20:15.326 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:15.326 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:15.326 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:15.326 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.326 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.326 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.326 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.326 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.326 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.326 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.326 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.586 00:20:15.586 07:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.586 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.586 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.847 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.847 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.847 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.847 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.847 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.847 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.847 { 00:20:15.847 "cntlid": 115, 00:20:15.847 "qid": 0, 00:20:15.847 "state": "enabled", 00:20:15.847 "thread": "nvmf_tgt_poll_group_000", 00:20:15.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:15.847 "listen_address": { 00:20:15.847 "trtype": "TCP", 00:20:15.847 "adrfam": "IPv4", 00:20:15.847 "traddr": "10.0.0.2", 00:20:15.847 "trsvcid": "4420" 00:20:15.847 }, 00:20:15.847 "peer_address": { 00:20:15.847 "trtype": "TCP", 00:20:15.847 "adrfam": "IPv4", 00:20:15.847 "traddr": "10.0.0.1", 00:20:15.847 "trsvcid": "45602" 00:20:15.847 }, 00:20:15.847 "auth": { 00:20:15.847 "state": "completed", 00:20:15.847 "digest": "sha512", 00:20:15.847 "dhgroup": "ffdhe3072" 00:20:15.847 } 00:20:15.847 } 00:20:15.847 ]' 00:20:15.847 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:20:15.847 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:15.847 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.847 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:15.847 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.847 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.847 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.847 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.108 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:20:16.108 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:20:17.049 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.049 07:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:17.049 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.049 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.049 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.049 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.049 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:17.049 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:17.049 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:20:17.049 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.049 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:17.049 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:17.049 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:17.049 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.049 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.049 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.049 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.049 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.049 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.049 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.049 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.309 00:20:17.309 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.309 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.309 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.570 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.570 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.570 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.570 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.570 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.570 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.570 { 00:20:17.570 "cntlid": 117, 00:20:17.570 "qid": 0, 00:20:17.570 "state": "enabled", 00:20:17.570 "thread": "nvmf_tgt_poll_group_000", 00:20:17.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:17.570 "listen_address": { 00:20:17.570 "trtype": "TCP", 00:20:17.570 "adrfam": "IPv4", 00:20:17.570 "traddr": "10.0.0.2", 00:20:17.570 "trsvcid": "4420" 00:20:17.570 }, 00:20:17.570 "peer_address": { 00:20:17.570 "trtype": "TCP", 00:20:17.570 "adrfam": "IPv4", 00:20:17.570 "traddr": "10.0.0.1", 00:20:17.570 "trsvcid": "45636" 00:20:17.570 }, 00:20:17.570 "auth": { 00:20:17.570 "state": "completed", 00:20:17.570 "digest": "sha512", 00:20:17.570 "dhgroup": "ffdhe3072" 00:20:17.570 } 00:20:17.570 } 00:20:17.570 ]' 00:20:17.570 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.570 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:17.570 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.570 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:17.570 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.570 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:17.570 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.570 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.832 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:20:17.832 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:20:18.402 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.402 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:18.402 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.402 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.664 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.664 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:18.664 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:18.664 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:18.664 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:18.664 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.664 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:18.664 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:18.664 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:18.664 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.664 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:18.664 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.664 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.664 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.664 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:18.664 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.664 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.926 00:20:18.926 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.926 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.926 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.187 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.187 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.187 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.187 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.187 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.187 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.187 { 00:20:19.187 "cntlid": 119, 00:20:19.187 "qid": 0, 00:20:19.187 "state": "enabled", 00:20:19.187 "thread": "nvmf_tgt_poll_group_000", 00:20:19.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:19.187 "listen_address": { 00:20:19.187 
"trtype": "TCP", 00:20:19.187 "adrfam": "IPv4", 00:20:19.187 "traddr": "10.0.0.2", 00:20:19.187 "trsvcid": "4420" 00:20:19.187 }, 00:20:19.187 "peer_address": { 00:20:19.187 "trtype": "TCP", 00:20:19.187 "adrfam": "IPv4", 00:20:19.187 "traddr": "10.0.0.1", 00:20:19.187 "trsvcid": "57108" 00:20:19.187 }, 00:20:19.187 "auth": { 00:20:19.187 "state": "completed", 00:20:19.187 "digest": "sha512", 00:20:19.187 "dhgroup": "ffdhe3072" 00:20:19.187 } 00:20:19.187 } 00:20:19.187 ]' 00:20:19.187 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.187 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:19.187 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.187 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:19.187 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.187 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.187 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.187 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.448 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:20:19.448 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.390 07:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.390 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.655 00:20:20.655 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.655 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.655 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.917 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.917 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.917 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.917 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.917 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.917 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.917 { 00:20:20.917 "cntlid": 121, 00:20:20.917 "qid": 0, 00:20:20.917 "state": "enabled", 00:20:20.917 "thread": "nvmf_tgt_poll_group_000", 00:20:20.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:20.917 "listen_address": { 00:20:20.917 "trtype": "TCP", 00:20:20.917 "adrfam": "IPv4", 00:20:20.917 "traddr": "10.0.0.2", 00:20:20.917 "trsvcid": "4420" 00:20:20.917 }, 00:20:20.917 "peer_address": { 00:20:20.917 "trtype": "TCP", 00:20:20.917 "adrfam": "IPv4", 00:20:20.917 "traddr": "10.0.0.1", 00:20:20.917 "trsvcid": "57140" 00:20:20.917 }, 00:20:20.917 "auth": { 00:20:20.917 "state": "completed", 00:20:20.917 "digest": "sha512", 00:20:20.917 "dhgroup": "ffdhe4096" 00:20:20.917 } 00:20:20.917 } 00:20:20.917 ]' 00:20:20.917 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.917 07:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:20.917 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.917 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:20.917 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.178 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.178 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.178 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.178 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:20:21.178 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:20:22.120 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:20:22.120 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:22.120 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.120 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.120 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.120 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.120 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:22.120 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:22.120 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:22.120 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.120 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:22.120 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:22.120 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:22.120 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.120 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.120 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.120 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.120 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.120 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.120 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.120 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.380 00:20:22.380 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.380 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.380 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.641 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.641 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.641 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.641 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.641 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.641 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.641 { 00:20:22.641 "cntlid": 123, 00:20:22.641 "qid": 0, 00:20:22.641 "state": "enabled", 00:20:22.641 "thread": "nvmf_tgt_poll_group_000", 00:20:22.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:22.641 "listen_address": { 00:20:22.641 "trtype": "TCP", 00:20:22.641 "adrfam": "IPv4", 00:20:22.641 "traddr": "10.0.0.2", 00:20:22.641 "trsvcid": "4420" 00:20:22.641 }, 00:20:22.641 "peer_address": { 00:20:22.641 "trtype": "TCP", 00:20:22.641 "adrfam": "IPv4", 00:20:22.641 "traddr": "10.0.0.1", 00:20:22.641 "trsvcid": "57170" 00:20:22.641 }, 00:20:22.641 "auth": { 00:20:22.641 "state": "completed", 00:20:22.641 "digest": "sha512", 00:20:22.641 "dhgroup": "ffdhe4096" 00:20:22.641 } 00:20:22.641 } 00:20:22.641 ]' 00:20:22.641 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.641 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:22.641 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.641 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:22.641 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.901 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:22.901 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.901 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.902 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:20:22.902 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.842 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.103 00:20:24.103 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.103 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.103 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.363 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.363 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.363 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.363 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.363 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.363 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.363 { 00:20:24.363 "cntlid": 125, 00:20:24.363 "qid": 0, 00:20:24.363 "state": "enabled", 00:20:24.363 "thread": "nvmf_tgt_poll_group_000", 00:20:24.363 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:24.363 "listen_address": { 00:20:24.363 "trtype": "TCP", 00:20:24.363 "adrfam": "IPv4", 00:20:24.363 "traddr": "10.0.0.2", 00:20:24.363 "trsvcid": "4420" 00:20:24.363 }, 00:20:24.363 "peer_address": { 00:20:24.363 "trtype": "TCP", 00:20:24.363 "adrfam": "IPv4", 00:20:24.363 "traddr": "10.0.0.1", 00:20:24.363 "trsvcid": "57200" 00:20:24.363 }, 00:20:24.363 "auth": { 00:20:24.363 "state": "completed", 00:20:24.363 "digest": "sha512", 00:20:24.363 "dhgroup": "ffdhe4096" 00:20:24.363 } 00:20:24.363 } 00:20:24.363 ]' 00:20:24.363 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.363 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:24.363 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.624 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:24.624 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.624 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.624 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.624 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.624 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:20:24.624 07:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:20:25.566 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.566 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:25.566 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.566 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.566 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.566 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.567 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:25.567 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:25.567 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:20:25.567 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:20:25.567 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:25.567 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:25.567 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:25.567 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.567 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:25.567 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.567 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.567 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.567 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:25.567 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.567 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.828 00:20:26.089 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:26.089 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.089 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.089 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.089 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.089 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.089 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.089 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.089 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.089 { 00:20:26.089 "cntlid": 127, 00:20:26.089 "qid": 0, 00:20:26.089 "state": "enabled", 00:20:26.089 "thread": "nvmf_tgt_poll_group_000", 00:20:26.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:26.089 "listen_address": { 00:20:26.089 "trtype": "TCP", 00:20:26.089 "adrfam": "IPv4", 00:20:26.089 "traddr": "10.0.0.2", 00:20:26.089 "trsvcid": "4420" 00:20:26.089 }, 00:20:26.089 "peer_address": { 00:20:26.089 "trtype": "TCP", 00:20:26.089 "adrfam": "IPv4", 00:20:26.089 "traddr": "10.0.0.1", 00:20:26.089 "trsvcid": "57228" 00:20:26.089 }, 00:20:26.089 "auth": { 00:20:26.089 "state": "completed", 00:20:26.089 "digest": "sha512", 00:20:26.089 "dhgroup": "ffdhe4096" 00:20:26.089 } 00:20:26.089 } 00:20:26.089 ]' 00:20:26.089 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.089 07:30:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:26.089 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.349 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:26.349 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.349 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.349 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.349 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.609 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:20:26.609 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:20:27.180 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.180 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:27.180 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.180 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.180 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.180 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.180 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.180 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:27.180 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:27.441 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:20:27.441 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.441 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:27.441 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:27.441 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:27.441 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.441 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.441 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.441 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.441 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.441 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.441 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.441 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.702 00:20:27.702 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.962 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.962 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.962 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.962 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.962 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.962 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.962 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.962 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.962 { 00:20:27.962 "cntlid": 129, 00:20:27.962 "qid": 0, 00:20:27.962 "state": "enabled", 00:20:27.962 "thread": "nvmf_tgt_poll_group_000", 00:20:27.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:27.962 "listen_address": { 00:20:27.962 "trtype": "TCP", 00:20:27.962 "adrfam": "IPv4", 00:20:27.962 "traddr": "10.0.0.2", 00:20:27.962 "trsvcid": "4420" 00:20:27.962 }, 00:20:27.962 "peer_address": { 00:20:27.962 "trtype": "TCP", 00:20:27.962 "adrfam": "IPv4", 00:20:27.962 "traddr": "10.0.0.1", 00:20:27.962 "trsvcid": "57248" 00:20:27.962 }, 00:20:27.962 "auth": { 00:20:27.962 "state": "completed", 00:20:27.962 "digest": "sha512", 00:20:27.962 "dhgroup": "ffdhe6144" 00:20:27.962 } 00:20:27.962 } 00:20:27.962 ]' 00:20:27.962 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.963 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.963 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.223 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:28.223 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.223 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:28.223 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.223 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.223 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:20:28.223 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.164 07:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.164 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.735 00:20:29.735 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.735 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.735 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.735 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.735 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.735 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.735 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.735 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.735 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.735 { 00:20:29.735 "cntlid": 131, 00:20:29.735 "qid": 0, 00:20:29.735 "state": 
"enabled", 00:20:29.735 "thread": "nvmf_tgt_poll_group_000", 00:20:29.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:29.735 "listen_address": { 00:20:29.735 "trtype": "TCP", 00:20:29.735 "adrfam": "IPv4", 00:20:29.735 "traddr": "10.0.0.2", 00:20:29.735 "trsvcid": "4420" 00:20:29.735 }, 00:20:29.735 "peer_address": { 00:20:29.735 "trtype": "TCP", 00:20:29.735 "adrfam": "IPv4", 00:20:29.735 "traddr": "10.0.0.1", 00:20:29.735 "trsvcid": "37616" 00:20:29.735 }, 00:20:29.735 "auth": { 00:20:29.735 "state": "completed", 00:20:29.735 "digest": "sha512", 00:20:29.735 "dhgroup": "ffdhe6144" 00:20:29.735 } 00:20:29.735 } 00:20:29.735 ]' 00:20:29.735 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.735 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:29.735 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.996 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:29.996 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.996 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.996 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.996 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.256 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret 
DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:20:30.256 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:20:30.825 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.825 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:30.825 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.825 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.825 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.825 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.825 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:30.825 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:31.085 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:20:31.085 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.085 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:31.085 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:31.085 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:31.085 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.085 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.085 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.085 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.085 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.085 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.085 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.085 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.346 00:20:31.346 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.346 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.346 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.607 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.607 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.607 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.607 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.607 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.607 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.607 { 00:20:31.607 "cntlid": 133, 00:20:31.607 "qid": 0, 00:20:31.607 "state": "enabled", 00:20:31.607 "thread": "nvmf_tgt_poll_group_000", 00:20:31.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:31.607 "listen_address": { 00:20:31.607 "trtype": "TCP", 00:20:31.607 "adrfam": "IPv4", 00:20:31.607 "traddr": "10.0.0.2", 00:20:31.607 "trsvcid": "4420" 00:20:31.607 }, 00:20:31.607 "peer_address": { 00:20:31.607 "trtype": "TCP", 00:20:31.607 "adrfam": "IPv4", 00:20:31.607 "traddr": "10.0.0.1", 00:20:31.607 "trsvcid": "37648" 00:20:31.607 }, 00:20:31.607 "auth": { 00:20:31.607 "state": "completed", 00:20:31.607 "digest": "sha512", 00:20:31.607 "dhgroup": "ffdhe6144" 00:20:31.607 } 
00:20:31.607 } 00:20:31.607 ]' 00:20:31.607 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.607 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.607 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.607 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:31.607 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.868 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.868 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.868 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.868 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:20:31.868 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:20:32.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.811 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:33.383 00:20:33.383 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.383 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.383 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.383 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.383 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:33.383 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.383 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.383 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.383 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.383 { 00:20:33.383 "cntlid": 135, 00:20:33.383 "qid": 0, 00:20:33.383 "state": "enabled", 00:20:33.383 "thread": "nvmf_tgt_poll_group_000", 00:20:33.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:33.383 "listen_address": { 00:20:33.383 "trtype": "TCP", 00:20:33.383 "adrfam": "IPv4", 00:20:33.383 "traddr": "10.0.0.2", 00:20:33.383 "trsvcid": "4420" 00:20:33.383 }, 00:20:33.384 "peer_address": { 00:20:33.384 "trtype": "TCP", 00:20:33.384 "adrfam": "IPv4", 00:20:33.384 "traddr": "10.0.0.1", 00:20:33.384 "trsvcid": "37668" 00:20:33.384 }, 00:20:33.384 "auth": { 00:20:33.384 "state": "completed", 00:20:33.384 "digest": "sha512", 00:20:33.384 "dhgroup": "ffdhe6144" 00:20:33.384 } 00:20:33.384 } 00:20:33.384 ]' 00:20:33.384 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.384 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.384 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.645 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:33.645 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.645 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.645 07:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.645 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.645 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:20:33.645 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:20:34.587 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.587 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:34.587 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.587 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.587 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.587 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.587 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.587 07:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:34.587 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:34.587 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:20:34.849 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.849 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:34.849 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:34.849 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:34.849 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.849 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.849 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.849 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.849 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.849 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.849 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.849 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.421 00:20:35.421 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.421 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.421 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.421 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.421 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.421 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.421 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.421 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.421 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.421 { 00:20:35.421 "cntlid": 137, 00:20:35.421 "qid": 0, 00:20:35.421 "state": "enabled", 00:20:35.421 "thread": "nvmf_tgt_poll_group_000", 00:20:35.421 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:35.421 "listen_address": { 00:20:35.421 "trtype": "TCP", 00:20:35.421 "adrfam": "IPv4", 00:20:35.421 "traddr": "10.0.0.2", 00:20:35.421 "trsvcid": "4420" 00:20:35.421 }, 00:20:35.421 "peer_address": { 00:20:35.421 "trtype": "TCP", 00:20:35.421 "adrfam": "IPv4", 00:20:35.421 "traddr": "10.0.0.1", 00:20:35.421 "trsvcid": "37686" 00:20:35.421 }, 00:20:35.421 "auth": { 00:20:35.421 "state": "completed", 00:20:35.421 "digest": "sha512", 00:20:35.421 "dhgroup": "ffdhe8192" 00:20:35.421 } 00:20:35.421 } 00:20:35.421 ]' 00:20:35.421 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.421 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.421 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.421 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:35.421 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.682 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.682 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.682 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.682 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret 
DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:20:35.682 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:20:36.624 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.624 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:36.624 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.624 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.624 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.624 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.624 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:36.624 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:36.624 07:30:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:20:36.624 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.624 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:36.624 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:36.624 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:36.624 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.625 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.625 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.625 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.887 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.887 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.887 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.887 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.459 00:20:37.459 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.459 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.459 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.459 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.459 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.459 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.459 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.459 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.459 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.459 { 00:20:37.459 "cntlid": 139, 00:20:37.459 "qid": 0, 00:20:37.459 "state": "enabled", 00:20:37.459 "thread": "nvmf_tgt_poll_group_000", 00:20:37.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:37.459 "listen_address": { 00:20:37.459 "trtype": "TCP", 00:20:37.459 "adrfam": "IPv4", 00:20:37.459 "traddr": "10.0.0.2", 00:20:37.459 "trsvcid": "4420" 00:20:37.459 }, 00:20:37.459 "peer_address": { 00:20:37.459 "trtype": "TCP", 00:20:37.459 "adrfam": "IPv4", 00:20:37.459 "traddr": "10.0.0.1", 00:20:37.459 "trsvcid": "37714" 00:20:37.459 }, 00:20:37.459 "auth": { 00:20:37.459 "state": 
"completed", 00:20:37.459 "digest": "sha512", 00:20:37.459 "dhgroup": "ffdhe8192" 00:20:37.459 } 00:20:37.459 } 00:20:37.459 ]' 00:20:37.459 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.459 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.459 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.720 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:37.720 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.720 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.720 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.720 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.720 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:20:37.720 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: --dhchap-ctrl-secret DHHC-1:02:MGRlNTA1NjZiMzhkYTM5MDk5NGUyNDJiZTQzYmExZTVhMDA1MWZmM2Q2NzFhMzhl6hIcBA==: 00:20:38.663 07:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.663 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:38.663 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.663 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.663 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.663 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.664 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:38.664 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:38.924 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:20:38.924 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.924 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:38.924 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:38.924 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:38.924 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.924 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.924 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.924 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.924 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.924 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.924 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.924 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.495 00:20:39.495 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.495 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.495 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.495 
07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.495 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.495 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.495 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.495 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.495 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.495 { 00:20:39.495 "cntlid": 141, 00:20:39.495 "qid": 0, 00:20:39.495 "state": "enabled", 00:20:39.495 "thread": "nvmf_tgt_poll_group_000", 00:20:39.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:39.495 "listen_address": { 00:20:39.495 "trtype": "TCP", 00:20:39.495 "adrfam": "IPv4", 00:20:39.495 "traddr": "10.0.0.2", 00:20:39.495 "trsvcid": "4420" 00:20:39.495 }, 00:20:39.495 "peer_address": { 00:20:39.495 "trtype": "TCP", 00:20:39.495 "adrfam": "IPv4", 00:20:39.495 "traddr": "10.0.0.1", 00:20:39.495 "trsvcid": "54580" 00:20:39.495 }, 00:20:39.495 "auth": { 00:20:39.495 "state": "completed", 00:20:39.495 "digest": "sha512", 00:20:39.495 "dhgroup": "ffdhe8192" 00:20:39.495 } 00:20:39.495 } 00:20:39.495 ]' 00:20:39.495 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.755 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.755 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.755 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.755 07:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.755 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.755 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.755 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.015 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:20:40.015 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:01:NjEwODNmZmE2OTc4MDQ0MzY2ZDY1MmU1NzA5ZDM1NmFfCOpC: 00:20:40.589 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.589 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:40.589 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.589 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.850 
07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.850 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.850 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:40.850 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:40.850 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:20:40.850 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.850 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:40.851 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:40.851 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:40.851 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.851 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:40.851 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.851 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.851 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.851 07:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:40.851 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.851 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.423 00:20:41.423 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.423 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.423 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.685 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.685 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.685 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.685 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.685 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.685 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.685 { 00:20:41.685 "cntlid": 143, 
00:20:41.685 "qid": 0, 00:20:41.685 "state": "enabled", 00:20:41.685 "thread": "nvmf_tgt_poll_group_000", 00:20:41.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:41.685 "listen_address": { 00:20:41.685 "trtype": "TCP", 00:20:41.685 "adrfam": "IPv4", 00:20:41.685 "traddr": "10.0.0.2", 00:20:41.685 "trsvcid": "4420" 00:20:41.685 }, 00:20:41.685 "peer_address": { 00:20:41.685 "trtype": "TCP", 00:20:41.685 "adrfam": "IPv4", 00:20:41.685 "traddr": "10.0.0.1", 00:20:41.685 "trsvcid": "54616" 00:20:41.685 }, 00:20:41.685 "auth": { 00:20:41.685 "state": "completed", 00:20:41.685 "digest": "sha512", 00:20:41.685 "dhgroup": "ffdhe8192" 00:20:41.685 } 00:20:41.685 } 00:20:41.685 ]' 00:20:41.685 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.685 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.685 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.685 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:41.685 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.685 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.685 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.685 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.946 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:20:41.946 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=: 00:20:42.888 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.888 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:42.888 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.889 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.461 00:20:43.461 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.461 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.461 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.722 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.722 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.722 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.722 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.722 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.722 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.722 { 00:20:43.722 "cntlid": 145, 00:20:43.722 "qid": 0, 00:20:43.722 "state": "enabled", 00:20:43.722 "thread": "nvmf_tgt_poll_group_000", 00:20:43.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:43.722 "listen_address": { 
00:20:43.722 "trtype": "TCP", 00:20:43.722 "adrfam": "IPv4", 00:20:43.722 "traddr": "10.0.0.2", 00:20:43.722 "trsvcid": "4420" 00:20:43.722 }, 00:20:43.722 "peer_address": { 00:20:43.722 "trtype": "TCP", 00:20:43.722 "adrfam": "IPv4", 00:20:43.722 "traddr": "10.0.0.1", 00:20:43.722 "trsvcid": "54626" 00:20:43.722 }, 00:20:43.722 "auth": { 00:20:43.722 "state": "completed", 00:20:43.722 "digest": "sha512", 00:20:43.722 "dhgroup": "ffdhe8192" 00:20:43.722 } 00:20:43.722 } 00:20:43.722 ]' 00:20:43.722 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.722 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.722 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.722 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:43.722 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.722 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.722 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.722 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.984 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:20:43.984 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NzNiYTg2YTkyOTU0ODQxOTAxMDAxNmI1YjEyYjJlMmI2OTc2ZTUxNzA4NDc5YmNm8TT1Rw==: --dhchap-ctrl-secret DHHC-1:03:MGU0YmExNGE1ODk4YTNmYmMxMzhiOTNiYWQ4ODFlMjc2MDFlYzI4MDYyMTliZDg1ZmEwMjk1Y2QzOTY1ZDcxZIwNWJ4=: 00:20:44.927 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.927 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:44.927 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.927 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.927 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.927 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:20:44.927 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.927 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.927 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.927 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:20:44.927 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # local es=0 00:20:44.927 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:20:44.927 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:44.927 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:44.927 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:44.927 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:44.927 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:20:44.927 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:44.927 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:45.187 request: 00:20:45.187 { 00:20:45.187 "name": "nvme0", 00:20:45.187 "trtype": "tcp", 00:20:45.187 "traddr": "10.0.0.2", 00:20:45.187 "adrfam": "ipv4", 00:20:45.187 "trsvcid": "4420", 00:20:45.187 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:45.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:45.187 "prchk_reftag": false, 00:20:45.187 "prchk_guard": false, 00:20:45.187 "hdgst": false, 00:20:45.187 "ddgst": 
false, 00:20:45.187 "dhchap_key": "key2", 00:20:45.187 "allow_unrecognized_csi": false, 00:20:45.187 "method": "bdev_nvme_attach_controller", 00:20:45.187 "req_id": 1 00:20:45.187 } 00:20:45.187 Got JSON-RPC error response 00:20:45.187 response: 00:20:45.187 { 00:20:45.187 "code": -5, 00:20:45.187 "message": "Input/output error" 00:20:45.187 } 00:20:45.187 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:45.187 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:45.187 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:45.187 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:45.187 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:45.187 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.187 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.187 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.187 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.187 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.187 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.187 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:45.188 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:45.188 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:45.188 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:45.188 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:45.188 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.188 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:45.188 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.188 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:45.188 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:45.188 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:45.760 request: 00:20:45.760 { 00:20:45.760 "name": "nvme0", 00:20:45.760 "trtype": "tcp", 00:20:45.760 "traddr": "10.0.0.2", 
00:20:45.760 "adrfam": "ipv4", 00:20:45.760 "trsvcid": "4420", 00:20:45.760 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:45.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:45.760 "prchk_reftag": false, 00:20:45.760 "prchk_guard": false, 00:20:45.760 "hdgst": false, 00:20:45.760 "ddgst": false, 00:20:45.760 "dhchap_key": "key1", 00:20:45.760 "dhchap_ctrlr_key": "ckey2", 00:20:45.760 "allow_unrecognized_csi": false, 00:20:45.760 "method": "bdev_nvme_attach_controller", 00:20:45.760 "req_id": 1 00:20:45.760 } 00:20:45.760 Got JSON-RPC error response 00:20:45.760 response: 00:20:45.760 { 00:20:45.760 "code": -5, 00:20:45.760 "message": "Input/output error" 00:20:45.760 } 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 
00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.760 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.331 request: 00:20:46.331 { 00:20:46.331 "name": "nvme0", 00:20:46.331 "trtype": "tcp", 00:20:46.331 "traddr": "10.0.0.2", 00:20:46.331 "adrfam": "ipv4", 00:20:46.331 "trsvcid": "4420", 00:20:46.331 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:46.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:46.331 "prchk_reftag": false, 00:20:46.331 "prchk_guard": false, 00:20:46.331 "hdgst": false, 00:20:46.331 "ddgst": false, 00:20:46.331 "dhchap_key": "key1", 00:20:46.331 "dhchap_ctrlr_key": "ckey1", 00:20:46.331 "allow_unrecognized_csi": false, 00:20:46.331 "method": "bdev_nvme_attach_controller", 00:20:46.331 "req_id": 1 00:20:46.331 } 00:20:46.331 Got JSON-RPC error response 00:20:46.331 response: 00:20:46.331 { 00:20:46.331 "code": -5, 00:20:46.331 "message": "Input/output error" 00:20:46.331 } 00:20:46.331 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:46.331 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:46.331 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:46.331 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:46.331 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:46.331 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.331 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.331 
07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.331 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2083854 00:20:46.331 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2083854 ']' 00:20:46.331 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2083854 00:20:46.331 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:46.331 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.332 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2083854 00:20:46.332 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:46.332 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:46.332 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2083854' 00:20:46.332 killing process with pid 2083854 00:20:46.332 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2083854 00:20:46.332 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2083854 00:20:46.592 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:46.593 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:46.593 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:46.593 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:46.593 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2111466 00:20:46.593 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2111466 00:20:46.593 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:46.593 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2111466 ']' 00:20:46.593 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.593 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.593 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:46.593 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.593 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.534 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.534 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:47.534 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:47.534 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.534 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.534 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.534 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:47.534 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2111466 00:20:47.534 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2111466 ']' 00:20:47.534 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.534 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.534 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:47.534 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:47.534 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.534 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:47.534 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:20:47.534 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd
00:20:47.534 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:47.534 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.534 null0
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.f8f
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.OpQ ]]
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OpQ
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Ueb
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Ggq ]]
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ggq
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.7xJ
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.x9E ]]
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.x9E
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.UZ6
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]]
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:47.794 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:48.736 nvme0n1
00:20:48.736 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:48.736 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:48.736 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:48.736 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:48.736 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:48.736 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.736 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.736 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.736 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:48.736 {
00:20:48.736 "cntlid": 1,
00:20:48.736 "qid": 0,
00:20:48.736 "state": "enabled",
00:20:48.736 "thread": "nvmf_tgt_poll_group_000",
00:20:48.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:20:48.736 "listen_address": {
00:20:48.736 "trtype": "TCP",
00:20:48.736 "adrfam": "IPv4",
00:20:48.736 "traddr": "10.0.0.2",
00:20:48.736 "trsvcid": "4420"
00:20:48.736 },
00:20:48.736 "peer_address": {
00:20:48.736 "trtype": "TCP",
00:20:48.736 "adrfam": "IPv4",
00:20:48.736 "traddr": "10.0.0.1",
00:20:48.736 "trsvcid": "44028"
00:20:48.736 },
00:20:48.736 "auth": {
00:20:48.736 "state": "completed",
00:20:48.736 "digest": "sha512",
00:20:48.736 "dhgroup": "ffdhe8192"
00:20:48.736 }
00:20:48.736 }
00:20:48.736 ]'
00:20:48.736 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:48.736 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:48.736 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:48.996 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:48.996 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:48.996 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:48.996 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:48.996 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:49.254 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=:
00:20:49.254 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=:
00:20:49.822 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:49.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:49.822 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:20:49.822 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:49.822 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:49.822 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:49.822 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:20:49.822 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:49.822 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:49.822 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:49.822 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:20:49.822 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:20:50.084 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:20:50.084 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:20:50.084 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:20:50.084 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:20:50.084 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:50.084 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:20:50.084 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:50.084 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:50.084 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:50.084 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:50.084 request:
00:20:50.084 {
00:20:50.084 "name": "nvme0",
00:20:50.084 "trtype": "tcp",
00:20:50.084 "traddr": "10.0.0.2",
00:20:50.084 "adrfam": "ipv4",
00:20:50.084 "trsvcid": "4420",
00:20:50.084 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:50.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:20:50.084 "prchk_reftag": false,
00:20:50.084 "prchk_guard": false,
00:20:50.084 "hdgst": false,
00:20:50.084 "ddgst": false,
00:20:50.084 "dhchap_key": "key3",
00:20:50.084 "allow_unrecognized_csi": false,
00:20:50.084 "method": "bdev_nvme_attach_controller",
00:20:50.084 "req_id": 1
00:20:50.084 }
00:20:50.084 Got JSON-RPC error response
00:20:50.084 response:
00:20:50.084 {
00:20:50.084 "code": -5,
00:20:50.084 "message": "Input/output error"
00:20:50.084 }
00:20:50.084 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:20:50.084 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:50.084 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:50.084 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:50.346 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:20:50.346 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:20:50.346 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:20:50.346 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:20:50.346 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:20:50.346 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:20:50.346 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:20:50.346 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:20:50.346 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:50.346 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:20:50.346 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:50.346 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:50.346 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:50.346 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:50.607 request:
00:20:50.607 {
00:20:50.607 "name": "nvme0",
00:20:50.607 "trtype": "tcp",
00:20:50.607 "traddr": "10.0.0.2",
00:20:50.607 "adrfam": "ipv4",
00:20:50.607 "trsvcid": "4420",
00:20:50.607 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:50.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:20:50.607 "prchk_reftag": false,
00:20:50.607 "prchk_guard": false,
00:20:50.607 "hdgst": false,
00:20:50.607 "ddgst": false,
00:20:50.607 "dhchap_key": "key3",
00:20:50.607 "allow_unrecognized_csi": false,
00:20:50.607 "method": "bdev_nvme_attach_controller",
00:20:50.607 "req_id": 1
00:20:50.607 }
00:20:50.607 Got JSON-RPC error response
00:20:50.607 response:
00:20:50.607 {
00:20:50.607 "code": -5,
00:20:50.607 "message": "Input/output error"
00:20:50.607 }
00:20:50.607 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:20:50.607 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:50.607 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:50.607 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:50.607 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:20:50.607 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:20:50.607 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:20:50.607 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:50.608 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:50.608 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:50.608 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:20:50.608 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:50.608 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:50.608 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:50.868 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:20:50.868 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:50.868 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:50.868 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:50.868 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:50.868 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:20:50.868 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:50.868 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:20:50.868 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:50.868 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:20:50.868 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:50.868 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:50.868 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:50.868 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:51.130 request:
00:20:51.130 {
00:20:51.130 "name": "nvme0",
00:20:51.130 "trtype": "tcp",
00:20:51.130 "traddr": "10.0.0.2",
00:20:51.130 "adrfam": "ipv4",
00:20:51.130 "trsvcid": "4420",
00:20:51.130 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:51.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:20:51.130 "prchk_reftag": false,
00:20:51.130 "prchk_guard": false,
00:20:51.130 "hdgst": false,
00:20:51.130 "ddgst": false,
00:20:51.130 "dhchap_key": "key0",
00:20:51.130 "dhchap_ctrlr_key": "key1",
00:20:51.130 "allow_unrecognized_csi": false,
00:20:51.130 "method": "bdev_nvme_attach_controller",
00:20:51.130 "req_id": 1
00:20:51.130 }
00:20:51.130 Got JSON-RPC error response
00:20:51.130 response:
00:20:51.130 {
00:20:51.130 "code": -5,
00:20:51.130 "message": "Input/output error"
00:20:51.130 }
00:20:51.130 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:20:51.130 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:51.130 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:51.130 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:51.130 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:20:51.130 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:20:51.130 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:20:51.391 nvme0n1
00:20:51.391 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:20:51.391 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:20:51.391 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:51.391 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:51.391 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:51.391 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:51.653 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1
00:20:51.653 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:51.653 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:51.653 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:51.653 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:20:51.653 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:20:51.653 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:20:52.594 nvme0n1
00:20:52.594 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:20:52.594 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:20:52.594 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:52.855 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:52.855 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:52.855 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.855 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.855 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.855 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:20:52.855 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:20:52.855 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:52.855 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:52.855 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=:
00:20:52.855 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: --dhchap-ctrl-secret DHHC-1:03:MDE1Y2NhYzY2NDNjOGFhZTUxNWQyODU4MjY3MTUxOWEyZGQ2Njg0MzNjMjY5YzE3YzkzOTVkNGY3MzYyOWI2Meccg2U=:
00:20:53.796 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:20:53.796 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:20:53.796 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:20:53.796 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:20:53.796 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:20:53.796 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:20:53.796 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:20:53.796 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:53.796 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:53.796 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:20:53.796 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:20:53.796 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:20:53.796 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:20:53.796 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:53.796 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:20:53.796 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:53.796 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1
00:20:53.796 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:20:53.796 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:20:54.408 request:
00:20:54.408 {
00:20:54.408 "name": "nvme0",
00:20:54.408 "trtype": "tcp",
00:20:54.408 "traddr": "10.0.0.2",
00:20:54.408 "adrfam": "ipv4",
00:20:54.408 "trsvcid": "4420",
00:20:54.408 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:54.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:20:54.408 "prchk_reftag": false,
00:20:54.408 "prchk_guard": false,
00:20:54.408 "hdgst": false,
00:20:54.408 "ddgst": false,
00:20:54.408 "dhchap_key": "key1",
00:20:54.408 "allow_unrecognized_csi": false,
00:20:54.408 "method": "bdev_nvme_attach_controller",
00:20:54.408 "req_id": 1
00:20:54.408 }
00:20:54.408 Got JSON-RPC error response
00:20:54.408 response:
00:20:54.408 {
00:20:54.408 "code": -5,
00:20:54.408 "message": "Input/output error"
00:20:54.408 }
00:20:54.408 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:20:54.408 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:54.408 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:54.408 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:54.408 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:54.408 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:54.408 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:55.038 nvme0n1
00:20:55.038 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:20:55.038 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:20:55.038 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:55.328 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:55.328 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:55.328 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:55.622 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:20:55.622 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:55.622 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:55.622 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:55.622 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:20:55.622 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:20:55.622 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:20:55.622 nvme0n1
00:20:55.622 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:20:55.622 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:20:55.622 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:55.883 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:55.883 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:55.883 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:56.144 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3
00:20:56.144 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:56.144 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:56.144 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:56.144 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: '' 2s
00:20:56.144 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:20:56.144 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:20:56.144 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c:
00:20:56.144 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:20:56.144 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:20:56.144 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:20:56.144 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c: ]]
00:20:56.144 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YzdkZDBiNGVlZWViMjc2ZDdlZTdhYjFjM2I0OWU0NTCWbS3c:
00:20:56.144 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:20:56.144 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:20:56.144 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:20:58.059 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:20:58.059 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:20:58.059 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:20:58.059 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:20:58.060 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:20:58.060 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:20:58.060 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1250 -- # return 0 00:20:58.060 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2 00:20:58.060 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.060 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.060 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.060 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: 2s 00:20:58.060 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:58.060 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:58.060 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:20:58.060 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: 00:20:58.060 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:58.060 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:58.060 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:20:58.060 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: ]] 00:20:58.060 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo 
DHHC-1:02:MGYwMmUxMmNlOTA5NGVjNzY4N2EyODM4NzU5OWNkM2NlMDYxN2Y4YmRlNDA1YTk0loyv6Q==: 00:20:58.060 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:58.060 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:00.605 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:00.605 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:00.605 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:00.605 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:00.605 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:00.605 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:00.605 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:00.605 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.605 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:00.605 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.605 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.605 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.605 07:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:00.605 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:00.605 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:01.177 nvme0n1 00:21:01.177 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:01.177 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.177 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.177 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.177 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:01.177 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:01.749 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:01.749 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:01.749 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.749 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.749 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:01.749 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.749 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.749 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.749 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:01.749 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:02.009 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:02.009 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.009 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:02.268 07:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.268 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:02.268 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.268 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.268 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.268 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:02.268 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:02.268 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:02.268 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:02.268 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:02.268 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:02.268 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:02.268 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:02.268 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:02.837 request: 00:21:02.837 { 00:21:02.837 "name": "nvme0", 00:21:02.837 "dhchap_key": "key1", 00:21:02.837 "dhchap_ctrlr_key": "key3", 00:21:02.837 "method": "bdev_nvme_set_keys", 00:21:02.837 "req_id": 1 00:21:02.837 } 00:21:02.837 Got JSON-RPC error response 00:21:02.837 response: 00:21:02.837 { 00:21:02.837 "code": -13, 00:21:02.837 "message": "Permission denied" 00:21:02.837 } 00:21:02.837 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:02.837 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:02.837 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:02.837 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:02.837 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:02.837 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:02.838 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.838 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:21:02.838 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:03.778 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:03.779 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:03.779 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.039 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:04.039 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:04.039 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.039 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.039 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.039 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:04.039 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:04.039 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:04.976 nvme0n1 00:21:04.976 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:04.976 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.976 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.976 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.976 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:04.976 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:04.976 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:04.976 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:04.976 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.976 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:04.976 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.976 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:04.976 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:05.548 request: 00:21:05.548 { 00:21:05.548 "name": "nvme0", 00:21:05.548 "dhchap_key": "key2", 
00:21:05.548 "dhchap_ctrlr_key": "key0", 00:21:05.548 "method": "bdev_nvme_set_keys", 00:21:05.548 "req_id": 1 00:21:05.548 } 00:21:05.548 Got JSON-RPC error response 00:21:05.548 response: 00:21:05.548 { 00:21:05.548 "code": -13, 00:21:05.548 "message": "Permission denied" 00:21:05.548 } 00:21:05.548 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:05.548 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:05.548 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:05.548 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:05.548 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:05.548 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:05.548 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.548 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:05.548 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:06.930 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:06.930 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:06.930 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.930 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:06.930 07:30:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:06.930 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:06.931 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2083888 00:21:06.931 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2083888 ']' 00:21:06.931 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2083888 00:21:06.931 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:06.931 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.931 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2083888 00:21:06.931 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:06.931 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:06.931 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2083888' 00:21:06.931 killing process with pid 2083888 00:21:06.931 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2083888 00:21:06.931 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2083888 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:07.190 rmmod nvme_tcp 00:21:07.190 rmmod nvme_fabrics 00:21:07.190 rmmod nvme_keyring 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2111466 ']' 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2111466 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2111466 ']' 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2111466 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2111466 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 2111466' 00:21:07.190 killing process with pid 2111466 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2111466 00:21:07.190 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2111466 00:21:07.450 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:07.450 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:07.450 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:07.450 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:07.450 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:21:07.450 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:07.450 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:07.450 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:07.450 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:07.450 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.450 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.450 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.359 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:09.359 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.f8f /tmp/spdk.key-sha256.Ueb 
/tmp/spdk.key-sha384.7xJ /tmp/spdk.key-sha512.UZ6 /tmp/spdk.key-sha512.OpQ /tmp/spdk.key-sha384.Ggq /tmp/spdk.key-sha256.x9E '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:09.359 00:21:09.359 real 2m45.934s 00:21:09.359 user 6m8.843s 00:21:09.359 sys 0m25.013s 00:21:09.359 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:09.359 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.359 ************************************ 00:21:09.359 END TEST nvmf_auth_target 00:21:09.359 ************************************ 00:21:09.359 07:30:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:09.359 07:30:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:09.359 07:30:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:09.359 07:30:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:09.359 07:30:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:09.621 ************************************ 00:21:09.621 START TEST nvmf_bdevio_no_huge 00:21:09.621 ************************************ 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:09.621 * Looking for test storage... 
00:21:09.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:09.621 07:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:09.621 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:09.622 07:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:09.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.622 --rc genhtml_branch_coverage=1 00:21:09.622 --rc genhtml_function_coverage=1 00:21:09.622 --rc genhtml_legend=1 00:21:09.622 --rc geninfo_all_blocks=1 00:21:09.622 --rc geninfo_unexecuted_blocks=1 00:21:09.622 00:21:09.622 ' 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:09.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.622 --rc genhtml_branch_coverage=1 00:21:09.622 --rc genhtml_function_coverage=1 00:21:09.622 --rc genhtml_legend=1 00:21:09.622 --rc geninfo_all_blocks=1 00:21:09.622 --rc geninfo_unexecuted_blocks=1 00:21:09.622 00:21:09.622 ' 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:09.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.622 --rc genhtml_branch_coverage=1 00:21:09.622 --rc genhtml_function_coverage=1 00:21:09.622 --rc genhtml_legend=1 00:21:09.622 --rc geninfo_all_blocks=1 00:21:09.622 --rc geninfo_unexecuted_blocks=1 00:21:09.622 00:21:09.622 ' 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:09.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.622 --rc genhtml_branch_coverage=1 00:21:09.622 --rc genhtml_function_coverage=1 00:21:09.622 --rc genhtml_legend=1 00:21:09.622 --rc geninfo_all_blocks=1 00:21:09.622 --rc geninfo_unexecuted_blocks=1 00:21:09.622 00:21:09.622 ' 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:09.622 
07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:09.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
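The log records a real bash error at `nvmf/common.sh` line 33: `'[' '' -eq 1 ']'` raises `[: : integer expression expected`, because an empty variable was passed to a numeric `-eq` test. The sketch below shows two common defensive patterns for that situation; it is illustrative commentary, not a patch to `common.sh`, and `flag` stands in for whatever variable was empty in the log.

```shell
#!/usr/bin/env bash
# The "[: : integer expression expected" message comes from testing an
# empty string numerically. Two defensive patterns (illustrative):

flag=""                          # stands in for the empty variable in the log

# 1) Give the expansion a numeric default so the test always sees an integer:
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag set"
fi

# 2) Arithmetic evaluation, which treats an empty or unset name as 0:
if (( flag == 1 )); then
    echo "flag set"
fi

echo "no error raised"
```

Note that the autotest tolerates the error because the failed `[` simply takes the false branch; the patterns above only make the intent explicit and keep the log clean.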
00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.622 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.623 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.884 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:09.884 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:09.884 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:21:09.884 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 
0x159b)' 00:21:18.022 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:18.022 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:18.022 Found net devices under 0000:31:00.0: cvl_0_0 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.022 
07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:18.022 Found net devices under 0000:31:00.1: cvl_0_1 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:18.022 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:21:18.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:21:18.023 00:21:18.023 --- 10.0.0.2 ping statistics --- 00:21:18.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.023 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:18.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:21:18.023 00:21:18.023 --- 10.0.0.1 ping statistics --- 00:21:18.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.023 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2120315 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2120315 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2120315 ']' 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:18.023 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:18.023 [2024-11-26 07:31:01.942697] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:21:18.023 [2024-11-26 07:31:01.942773] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:18.023 [2024-11-26 07:31:02.056912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:18.023 [2024-11-26 07:31:02.116432] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.023 [2024-11-26 07:31:02.116478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.023 [2024-11-26 07:31:02.116486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.023 [2024-11-26 07:31:02.116493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.023 [2024-11-26 07:31:02.116499] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
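Earlier in the trace, `nvmf_tcp_init` builds the split topology the target runs in: `cvl_0_0` is moved into namespace `cvl_0_0_ns_spdk` with 10.0.0.2/24, `cvl_0_1` stays on the host with 10.0.0.1/24, and both directions are verified with `ping -c 1`. The sketch below reproduces that topology with a veth pair so it runs without the Intel E810 ports; it needs root (CAP_NET_ADMIN), and the interface and namespace names are illustrative, not the ones in the log.

```shell
#!/usr/bin/env bash
# Sketch of the target/initiator split from the log, using a veth pair in
# place of the physical cvl_0_0 / cvl_0_1 ports. Needs root; skips otherwise.
set -euo pipefail

if [ "$(id -u)" -ne 0 ]; then
    echo "needs CAP_NET_ADMIN/root; skipping"
    exit 0
fi

NS=tgt_ns
ip netns add "$NS" 2>/dev/null || { echo "netns unavailable; skipping"; exit 0; }
trap 'ip netns del "$NS"' EXIT            # cleanup removes the moved veth end too

ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns "$NS"          # "target" side lives in the namespace

ip addr add 10.0.0.1/24 dev veth_init     # initiator address, host side
ip link set veth_init up
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt
ip netns exec "$NS" ip link set veth_tgt up
ip netns exec "$NS" ip link set lo up

ping -c 1 10.0.0.2                        # host -> namespace, as in the log
ip netns exec "$NS" ping -c 1 10.0.0.1    # namespace -> host
echo "topology up"
```

The namespace is what lets the target and initiator share one machine while still exercising a real TCP path; the `ipts`/`iptables` rule in the log additionally opens port 4420 on the initiator-side interface.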
00:21:18.023 [2024-11-26 07:31:02.118039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:18.023 [2024-11-26 07:31:02.118202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:21:18.023 [2024-11-26 07:31:02.118362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:18.023 [2024-11-26 07:31:02.118362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:18.967 [2024-11-26 07:31:02.802776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:18.967 07:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:18.967 Malloc0 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:18.967 [2024-11-26 07:31:02.856601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.967 07:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:21:18.967 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.968 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.968 { 00:21:18.968 "params": { 00:21:18.968 "name": "Nvme$subsystem", 00:21:18.968 "trtype": "$TEST_TRANSPORT", 00:21:18.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.968 "adrfam": "ipv4", 00:21:18.968 "trsvcid": "$NVMF_PORT", 00:21:18.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.968 "hdgst": ${hdgst:-false}, 00:21:18.968 "ddgst": ${ddgst:-false} 00:21:18.968 }, 00:21:18.968 "method": "bdev_nvme_attach_controller" 00:21:18.968 } 00:21:18.968 EOF 00:21:18.968 )") 00:21:18.968 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:21:18.968 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
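The `gen_nvmf_target_json` trace above builds the bdevio configuration by expanding one templated heredoc per subsystem into a `config` array, then joining and normalizing it with `jq`. A sketch of that templating pattern follows; the values are the ones printed in the log, the function name is invented here, and `python3 -m json.tool` stands in for `jq` as the validator.

```shell
#!/usr/bin/env bash
# Sketch of the heredoc templating gen_nvmf_target_json performs: shell
# variables expand inside the heredoc, producing one attach-controller
# stanza per subsystem. Illustrative, not the SPDK helper itself.
gen_target_json() {
    local subsystem=${1:-1}
    cat <<EOF
{
  "params": {
    "name": "Nvme${subsystem}",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode${subsystem}",
    "hostnqn": "nqn.2016-06.io.spdk:host${subsystem}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# Validate that the generated snippet parses as JSON:
gen_target_json 1 | python3 -m json.tool > /dev/null && echo "valid JSON"
```

Feeding this through a JSON processor before handing it to `bdevio --json /dev/fd/62` (as the log does with `jq .`) catches template typos early instead of at controller attach time.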
00:21:18.968 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:21:18.968 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:18.968 "params": { 00:21:18.968 "name": "Nvme1", 00:21:18.968 "trtype": "tcp", 00:21:18.968 "traddr": "10.0.0.2", 00:21:18.968 "adrfam": "ipv4", 00:21:18.968 "trsvcid": "4420", 00:21:18.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:18.968 "hdgst": false, 00:21:18.968 "ddgst": false 00:21:18.968 }, 00:21:18.968 "method": "bdev_nvme_attach_controller" 00:21:18.968 }' 00:21:18.968 [2024-11-26 07:31:02.914393] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:21:18.968 [2024-11-26 07:31:02.914467] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2120549 ] 00:21:18.968 [2024-11-26 07:31:03.004459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:18.968 [2024-11-26 07:31:03.059923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.968 [2024-11-26 07:31:03.060056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.968 [2024-11-26 07:31:03.060060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.229 I/O targets: 00:21:19.229 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:19.229 00:21:19.229 00:21:19.229 CUnit - A unit testing framework for C - Version 2.1-3 00:21:19.229 http://cunit.sourceforge.net/ 00:21:19.229 00:21:19.229 00:21:19.229 Suite: bdevio tests on: Nvme1n1 00:21:19.229 Test: blockdev write read block ...passed 00:21:19.229 Test: blockdev write zeroes read block ...passed 00:21:19.490 Test: blockdev write zeroes read no split ...passed 00:21:19.490 Test: blockdev write zeroes 
read split ...passed 00:21:19.490 Test: blockdev write zeroes read split partial ...passed 00:21:19.490 Test: blockdev reset ...[2024-11-26 07:31:03.440786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:19.490 [2024-11-26 07:31:03.440854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x93afb0 (9): Bad file descriptor 00:21:19.490 [2024-11-26 07:31:03.455689] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:21:19.490 passed 00:21:19.490 Test: blockdev write read 8 blocks ...passed 00:21:19.490 Test: blockdev write read size > 128k ...passed 00:21:19.490 Test: blockdev write read invalid size ...passed 00:21:19.490 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:19.490 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:19.490 Test: blockdev write read max offset ...passed 00:21:19.750 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:19.750 Test: blockdev writev readv 8 blocks ...passed 00:21:19.750 Test: blockdev writev readv 30 x 1block ...passed 00:21:19.750 Test: blockdev writev readv block ...passed 00:21:19.750 Test: blockdev writev readv size > 128k ...passed 00:21:19.750 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:19.750 Test: blockdev comparev and writev ...[2024-11-26 07:31:03.721832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.750 [2024-11-26 07:31:03.721857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.750 [2024-11-26 07:31:03.721872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.750 [2024-11-26 
07:31:03.721877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.750 [2024-11-26 07:31:03.722334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.750 [2024-11-26 07:31:03.722342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:19.750 [2024-11-26 07:31:03.722352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.750 [2024-11-26 07:31:03.722357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:19.750 [2024-11-26 07:31:03.722833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.750 [2024-11-26 07:31:03.722840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:19.750 [2024-11-26 07:31:03.722849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.750 [2024-11-26 07:31:03.722855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:19.750 [2024-11-26 07:31:03.723356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.750 [2024-11-26 07:31:03.723364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:19.750 [2024-11-26 07:31:03.723374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.750 [2024-11-26 07:31:03.723380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:19.750 passed 00:21:19.750 Test: blockdev nvme passthru rw ...passed 00:21:19.750 Test: blockdev nvme passthru vendor specific ...[2024-11-26 07:31:03.806703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:19.750 [2024-11-26 07:31:03.806712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:19.750 [2024-11-26 07:31:03.807115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:19.750 [2024-11-26 07:31:03.807124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:19.750 [2024-11-26 07:31:03.807376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:19.750 [2024-11-26 07:31:03.807383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:19.750 [2024-11-26 07:31:03.807625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:19.750 [2024-11-26 07:31:03.807632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:19.750 passed 00:21:19.750 Test: blockdev nvme admin passthru ...passed 00:21:19.750 Test: blockdev copy ...passed 00:21:19.750 00:21:19.750 Run Summary: Type Total Ran Passed Failed Inactive 00:21:19.750 suites 1 1 n/a 0 0 00:21:19.750 tests 23 23 23 0 0 00:21:19.750 asserts 152 152 152 0 n/a 00:21:19.750 00:21:19.750 Elapsed time = 1.203 seconds 
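As an aside on the trace above: before launching bdevio, `gen_nvmf_target_json` emitted a per-subsystem `bdev_nvme_attach_controller` config object (piped through `jq` and fed to bdevio via `--json /dev/fd/62`). A minimal Python sketch that reproduces the same JSON object is below; the function name and parameter defaults here are illustrative, not SPDK's actual shell implementation, and the field values are taken directly from the printed config in the log.

```python
import json

def attach_controller_config(subsystem=1, trtype="tcp", traddr="10.0.0.2",
                             trsvcid="4420", hdgst=False, ddgst=False):
    # Mirrors the JSON printed by gen_nvmf_target_json in the trace above.
    # This helper name is hypothetical; SPDK builds the same object in bash
    # with a heredoc per subsystem and joins them with IFS=','.
    return {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": trtype,
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": hdgst,
            "ddgst": ddgst,
        },
        "method": "bdev_nvme_attach_controller",
    }

# Render it the way the test does before handing it to bdevio.
print(json.dumps(attach_controller_config(), indent=2))
```

With the defaults above this yields exactly the `Nvme1` / `cnode1` / `10.0.0.2:4420` object visible in the trace.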
00:21:20.009 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:20.009 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.009 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:20.009 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.009 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:20.010 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:20.010 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:20.010 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:21:20.010 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:20.010 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:21:20.010 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:20.010 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:20.010 rmmod nvme_tcp 00:21:20.270 rmmod nvme_fabrics 00:21:20.270 rmmod nvme_keyring 00:21:20.270 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:20.270 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:21:20.270 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:21:20.270 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2120315 ']' 00:21:20.270 07:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2120315 00:21:20.270 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2120315 ']' 00:21:20.270 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2120315 00:21:20.270 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:21:20.270 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.270 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2120315 00:21:20.270 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:21:20.270 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:21:20.270 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2120315' 00:21:20.270 killing process with pid 2120315 00:21:20.270 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2120315 00:21:20.270 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2120315 00:21:20.530 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:20.530 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:20.530 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:20.530 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:21:20.530 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:21:20.530 07:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:20.530 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:21:20.530 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:20.530 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:20.530 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.530 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:20.530 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:23.075 00:21:23.075 real 0m13.209s 00:21:23.075 user 0m13.980s 00:21:23.075 sys 0m7.343s 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.075 ************************************ 00:21:23.075 END TEST nvmf_bdevio_no_huge 00:21:23.075 ************************************ 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:23.075 
************************************ 00:21:23.075 START TEST nvmf_tls 00:21:23.075 ************************************ 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:23.075 * Looking for test storage... 00:21:23.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:23.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.075 --rc genhtml_branch_coverage=1 00:21:23.075 --rc genhtml_function_coverage=1 00:21:23.075 --rc genhtml_legend=1 00:21:23.075 --rc geninfo_all_blocks=1 00:21:23.075 --rc geninfo_unexecuted_blocks=1 00:21:23.075 00:21:23.075 ' 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:23.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.075 --rc genhtml_branch_coverage=1 00:21:23.075 --rc genhtml_function_coverage=1 00:21:23.075 --rc genhtml_legend=1 00:21:23.075 --rc geninfo_all_blocks=1 00:21:23.075 --rc geninfo_unexecuted_blocks=1 00:21:23.075 00:21:23.075 ' 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:23.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.075 --rc genhtml_branch_coverage=1 00:21:23.075 --rc genhtml_function_coverage=1 00:21:23.075 --rc genhtml_legend=1 00:21:23.075 --rc geninfo_all_blocks=1 00:21:23.075 --rc geninfo_unexecuted_blocks=1 00:21:23.075 00:21:23.075 ' 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:23.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.075 --rc genhtml_branch_coverage=1 00:21:23.075 --rc genhtml_function_coverage=1 00:21:23.075 --rc genhtml_legend=1 00:21:23.075 --rc geninfo_all_blocks=1 00:21:23.075 --rc geninfo_unexecuted_blocks=1 00:21:23.075 00:21:23.075 ' 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:23.075 
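The `lt 1.15 2` trace above shows `scripts/common.sh` comparing dotted version strings component-wise (split on `.`, compare each numeric field, missing fields treated as 0) to decide whether the installed `lcov` predates 2.x. A small Python sketch of that comparison logic, assuming purely numeric components (the shell version also tolerates `-`/`:` separators):

```python
def version_lt(a: str, b: str) -> bool:
    """True if dotted version a < b, comparing numeric fields left to right."""
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    # Pad the shorter version with zeros, as the shell loop effectively does
    # by iterating up to the longer of the two lengths.
    n = max(len(pa), len(pb))
    pa += [0] * (n - len(pa))
    pb += [0] * (n - len(pb))
    return pa < pb
```

So `version_lt("1.15", "2")` is true (1 < 2 decides it before the second field is reached), which is why the log takes the `lcov --version < 2` branch and sets the older `--rc lcov_*` option spellings.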
07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:23.075 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:23.076 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:23.076 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:23.076 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:23.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:21:23.076 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:31.217 07:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:31.217 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:31.217 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:31.217 07:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:31.217 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:31.218 Found net devices under 0000:31:00.0: cvl_0_0 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:31.218 Found net devices under 0000:31:00.1: cvl_0_1 00:21:31.218 07:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:31.218 
07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:31.218 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:31.479 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:31.479 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:31.479 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:31.479 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:31.479 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:31.479 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:31.479 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:31.479 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:31.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:21:31.479 00:21:31.479 --- 10.0.0.2 ping statistics --- 00:21:31.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.479 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:21:31.479 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:31.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:31.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:21:31.479 00:21:31.479 --- 10.0.0.1 ping statistics --- 00:21:31.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.479 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:21:31.479 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.479 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:21:31.479 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:31.740 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.740 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:31.740 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:31.740 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.740 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:31.740 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:31.740 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:31.740 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:31.740 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:31.740 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.740 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2125677 00:21:31.740 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2125677 00:21:31.740 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:31.740 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2125677 ']' 00:21:31.740 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.740 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.740 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.740 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.740 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.740 [2024-11-26 07:31:15.736937] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:21:31.740 [2024-11-26 07:31:15.737008] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.740 [2024-11-26 07:31:15.845231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.002 [2024-11-26 07:31:15.895315] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.002 [2024-11-26 07:31:15.895373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:32.002 [2024-11-26 07:31:15.895381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.002 [2024-11-26 07:31:15.895388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.002 [2024-11-26 07:31:15.895395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:32.002 [2024-11-26 07:31:15.896243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.575 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.575 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:32.575 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:32.575 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:32.575 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.575 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.575 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:21:32.575 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:32.836 true 00:21:32.836 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:32.836 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:21:33.097 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:21:33.097 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:21:33.098 
07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:33.098 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:33.098 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:21:33.359 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:21:33.359 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:21:33.359 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:33.619 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:33.619 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:21:33.619 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:21:33.619 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:21:33.619 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:33.619 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:21:33.880 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:21:33.880 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:21:33.880 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:21:34.140 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:34.140 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:34.140 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:21:34.140 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:21:34.140 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:34.401 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:34.401 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:34.662 07:31:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.otgW8jklEc 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.1jP4s5eYbO 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.otgW8jklEc 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
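The `format_interchange_psk` calls above wrap a raw configured key in the NVMe TLS PSK interchange format: a `NVMeTLSkey-1` prefix, a two-digit hash indicator, and a base64 body. A minimal sketch of that encoding, assuming the layout implied by the log output (ASCII key bytes followed by a 4-byte little-endian CRC32 trailer, per the embedded `python -` helper in `nvmf/common.sh`; the function name and exact byte order here are inferred, not copied from SPDK source):

```python
import base64
import zlib

def format_interchange_psk(configured_key: str, hash_id: int = 1) -> str:
    """Sketch of the NVMe TLS PSK interchange format:
    NVMeTLSkey-1:<hh>:<base64(key bytes || CRC32(key), CRC little-endian)>:
    """
    key = configured_key.encode("ascii")          # keys in this log are ASCII hex strings
    crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte CRC32 trailer over the key bytes
    body = base64.b64encode(key + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02d}:{body}:"

psk = format_interchange_psk("00112233445566778899aabbccddeeff", 1)
print(psk)  # NVMeTLSkey-1:01:<48 base64 chars for a 32-byte key>:
```

For a 32-byte configured key this yields a 36-byte payload (key plus CRC), i.e. 48 base64 characters with no padding, which matches the length of the `NVMeTLSkey-1:01:...` strings written to the `mktemp` key files above.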
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.1jP4s5eYbO 00:21:34.662 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:34.922 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:35.183 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.otgW8jklEc 00:21:35.183 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.otgW8jklEc 00:21:35.183 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:35.183 [2024-11-26 07:31:19.286189] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.183 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:35.443 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:35.703 [2024-11-26 07:31:19.623000] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:35.703 [2024-11-26 07:31:19.623187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.704 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:35.704 malloc0 00:21:35.704 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:35.964 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.otgW8jklEc 00:21:36.225 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:36.225 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.otgW8jklEc 00:21:48.476 Initializing NVMe Controllers 00:21:48.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:48.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:48.476 Initialization complete. Launching workers. 
00:21:48.476 ======================================================== 00:21:48.476 Latency(us) 00:21:48.476 Device Information : IOPS MiB/s Average min max 00:21:48.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18564.20 72.52 3447.50 1166.69 4312.48 00:21:48.476 ======================================================== 00:21:48.476 Total : 18564.20 72.52 3447.50 1166.69 4312.48 00:21:48.476 00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.otgW8jklEc 00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.otgW8jklEc 00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2128441 00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2128441 /var/tmp/bdevperf.sock 00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2128441 ']' 00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
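One sanity check on the `spdk_nvme_perf` summary above: at a fixed queue depth, Little's law (outstanding IOs = IOPS × mean latency) should recover the `-q` value. A quick arithmetic sketch applying it to the reported numbers (annotation only, not part of the test run):

```python
# Little's law: mean concurrency = throughput x mean time in system.
# The perf run used -q 64; the summary reports 18564.20 IOPS at an
# average latency of 3447.50 us.
iops = 18564.20
avg_latency_s = 3447.50 / 1_000_000  # microseconds -> seconds

outstanding = iops * avg_latency_s
print(round(outstanding))  # ~64, matching the configured queue depth
```

The reported throughput and latency are therefore self-consistent: the initiator kept the full queue depth of 64 in flight for the duration of the run.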
00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:48.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.476 [2024-11-26 07:31:30.469351] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:21:48.476 [2024-11-26 07:31:30.469406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2128441 ] 00:21:48.476 [2024-11-26 07:31:30.534702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.476 [2024-11-26 07:31:30.563521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.otgW8jklEc 00:21:48.476 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:21:48.476 [2024-11-26 07:31:30.964898] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:48.476 TLSTESTn1 00:21:48.476 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:48.476 Running I/O for 10 seconds... 00:21:49.126 4940.00 IOPS, 19.30 MiB/s [2024-11-26T06:31:34.204Z] 5675.50 IOPS, 22.17 MiB/s [2024-11-26T06:31:35.592Z] 5677.00 IOPS, 22.18 MiB/s [2024-11-26T06:31:36.535Z] 5701.75 IOPS, 22.27 MiB/s [2024-11-26T06:31:37.478Z] 5622.00 IOPS, 21.96 MiB/s [2024-11-26T06:31:38.423Z] 5691.50 IOPS, 22.23 MiB/s [2024-11-26T06:31:39.367Z] 5703.29 IOPS, 22.28 MiB/s [2024-11-26T06:31:40.310Z] 5647.00 IOPS, 22.06 MiB/s [2024-11-26T06:31:41.254Z] 5693.44 IOPS, 22.24 MiB/s [2024-11-26T06:31:41.254Z] 5718.40 IOPS, 22.34 MiB/s 00:21:57.117 Latency(us) 00:21:57.117 [2024-11-26T06:31:41.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.117 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:57.117 Verification LBA range: start 0x0 length 0x2000 00:21:57.117 TLSTESTn1 : 10.02 5719.25 22.34 0.00 0.00 22339.12 5707.09 40850.77 00:21:57.117 [2024-11-26T06:31:41.254Z] =================================================================================================================== 00:21:57.117 [2024-11-26T06:31:41.254Z] Total : 5719.25 22.34 0.00 0.00 22339.12 5707.09 40850.77 00:21:57.117 { 00:21:57.117 "results": [ 00:21:57.117 { 00:21:57.117 "job": "TLSTESTn1", 00:21:57.117 "core_mask": "0x4", 00:21:57.117 "workload": "verify", 00:21:57.117 "status": "finished", 00:21:57.117 "verify_range": { 00:21:57.117 "start": 0, 00:21:57.117 "length": 8192 00:21:57.117 }, 00:21:57.117 "queue_depth": 128, 00:21:57.117 "io_size": 4096, 00:21:57.117 "runtime": 10.020722, 00:21:57.117 "iops": 
5719.248573106808, 00:21:57.117 "mibps": 22.34081473869847, 00:21:57.117 "io_failed": 0, 00:21:57.117 "io_timeout": 0, 00:21:57.117 "avg_latency_us": 22339.123687482912, 00:21:57.117 "min_latency_us": 5707.093333333333, 00:21:57.117 "max_latency_us": 40850.77333333333 00:21:57.117 } 00:21:57.117 ], 00:21:57.117 "core_count": 1 00:21:57.117 } 00:21:57.117 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:57.117 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2128441 00:21:57.117 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2128441 ']' 00:21:57.117 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2128441 00:21:57.117 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:57.117 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.117 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2128441 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2128441' 00:21:57.379 killing process with pid 2128441 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2128441 00:21:57.379 Received shutdown signal, test time was about 10.000000 seconds 00:21:57.379 00:21:57.379 Latency(us) 00:21:57.379 [2024-11-26T06:31:41.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.379 [2024-11-26T06:31:41.516Z] 
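The bdevperf JSON above reports both `iops` and `mibps`; with the 4096-byte IOs configured via `-o 4096`, the two are related by a fixed factor of 256 (1 MiB / 4 KiB). A quick check against the reported values:

```python
iops = 5719.248573106808   # "iops" from the JSON result above
io_size = 4096             # bdevperf was started with -o 4096
mib = 1024 * 1024

mibps = iops * io_size / mib   # equivalently iops / 256 for 4 KiB IOs
print(round(mibps, 2))  # 22.34, matching the reported "mibps"
```

This is a useful cross-check when reading these logs: if `mibps` ever drifts from `iops / 256` in a 4 KiB run, the result line was not produced by the workload the command line claims.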
=================================================================================================================== 00:21:57.379 [2024-11-26T06:31:41.516Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2128441 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1jP4s5eYbO 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1jP4s5eYbO 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1jP4s5eYbO 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1jP4s5eYbO 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2130581 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2130581 /var/tmp/bdevperf.sock 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2130581 ']' 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:57.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:57.379 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.379 [2024-11-26 07:31:41.434778] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:21:57.379 [2024-11-26 07:31:41.434836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2130581 ] 00:21:57.379 [2024-11-26 07:31:41.497650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.641 [2024-11-26 07:31:41.526351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.641 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:57.641 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:57.641 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1jP4s5eYbO 00:21:57.641 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:57.902 [2024-11-26 07:31:41.911716] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:57.902 [2024-11-26 07:31:41.915928] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:57.902 [2024-11-26 07:31:41.916675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1187980 (107): Transport endpoint is not connected 00:21:57.902 [2024-11-26 07:31:41.917670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1187980 (9): Bad file descriptor 00:21:57.902 
[2024-11-26 07:31:41.918671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:57.902 [2024-11-26 07:31:41.918679] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:57.902 [2024-11-26 07:31:41.918685] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:57.902 [2024-11-26 07:31:41.918693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:21:57.902 request: 00:21:57.902 { 00:21:57.902 "name": "TLSTEST", 00:21:57.902 "trtype": "tcp", 00:21:57.902 "traddr": "10.0.0.2", 00:21:57.902 "adrfam": "ipv4", 00:21:57.902 "trsvcid": "4420", 00:21:57.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.902 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:57.902 "prchk_reftag": false, 00:21:57.902 "prchk_guard": false, 00:21:57.902 "hdgst": false, 00:21:57.902 "ddgst": false, 00:21:57.902 "psk": "key0", 00:21:57.902 "allow_unrecognized_csi": false, 00:21:57.902 "method": "bdev_nvme_attach_controller", 00:21:57.902 "req_id": 1 00:21:57.902 } 00:21:57.902 Got JSON-RPC error response 00:21:57.902 response: 00:21:57.902 { 00:21:57.902 "code": -5, 00:21:57.902 "message": "Input/output error" 00:21:57.902 } 00:21:57.902 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2130581 00:21:57.902 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2130581 ']' 00:21:57.902 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2130581 00:21:57.902 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:57.902 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.902 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2130581 00:21:57.902 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:57.902 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:57.902 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2130581' 00:21:57.902 killing process with pid 2130581 00:21:57.902 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2130581 00:21:57.902 Received shutdown signal, test time was about 10.000000 seconds 00:21:57.902 00:21:57.902 Latency(us) 00:21:57.902 [2024-11-26T06:31:42.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.903 [2024-11-26T06:31:42.040Z] =================================================================================================================== 00:21:57.903 [2024-11-26T06:31:42.040Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:57.903 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2130581 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.otgW8jklEc 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.otgW8jklEc 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.otgW8jklEc 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.otgW8jklEc 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2130804 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2130804 /var/tmp/bdevperf.sock 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 10 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2130804 ']' 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:58.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.164 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.164 [2024-11-26 07:31:42.144929] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:21:58.164 [2024-11-26 07:31:42.144988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2130804 ] 00:21:58.164 [2024-11-26 07:31:42.208185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.164 [2024-11-26 07:31:42.236756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.425 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.425 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:58.425 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.otgW8jklEc 00:21:58.425 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:21:58.686 [2024-11-26 07:31:42.622086] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:58.686 [2024-11-26 07:31:42.627032] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:58.686 [2024-11-26 07:31:42.627052] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:58.686 [2024-11-26 07:31:42.627070] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:58.686 [2024-11-26 07:31:42.627309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ca980 (107): Transport endpoint is not connected 00:21:58.686 [2024-11-26 07:31:42.628304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ca980 (9): Bad file descriptor 00:21:58.686 [2024-11-26 07:31:42.629306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:58.686 [2024-11-26 07:31:42.629314] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:58.686 [2024-11-26 07:31:42.629320] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:58.686 [2024-11-26 07:31:42.629327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:21:58.686 request: 00:21:58.686 { 00:21:58.686 "name": "TLSTEST", 00:21:58.686 "trtype": "tcp", 00:21:58.686 "traddr": "10.0.0.2", 00:21:58.686 "adrfam": "ipv4", 00:21:58.686 "trsvcid": "4420", 00:21:58.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.686 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:58.686 "prchk_reftag": false, 00:21:58.686 "prchk_guard": false, 00:21:58.686 "hdgst": false, 00:21:58.686 "ddgst": false, 00:21:58.686 "psk": "key0", 00:21:58.686 "allow_unrecognized_csi": false, 00:21:58.686 "method": "bdev_nvme_attach_controller", 00:21:58.686 "req_id": 1 00:21:58.686 } 00:21:58.686 Got JSON-RPC error response 00:21:58.686 response: 00:21:58.686 { 00:21:58.686 "code": -5, 00:21:58.686 "message": "Input/output error" 00:21:58.686 } 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2130804 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2130804 ']' 00:21:58.686 07:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2130804 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2130804 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2130804' 00:21:58.686 killing process with pid 2130804 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2130804 00:21:58.686 Received shutdown signal, test time was about 10.000000 seconds 00:21:58.686 00:21:58.686 Latency(us) 00:21:58.686 [2024-11-26T06:31:42.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.686 [2024-11-26T06:31:42.823Z] =================================================================================================================== 00:21:58.686 [2024-11-26T06:31:42.823Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2130804 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:58.686 07:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.otgW8jklEc 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.otgW8jklEc 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.otgW8jklEc 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.otgW8jklEc 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2130817 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2130817 /var/tmp/bdevperf.sock 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2130817 ']' 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.686 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:58.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:58.687 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.687 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.948 [2024-11-26 07:31:42.859120] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:21:58.948 [2024-11-26 07:31:42.859176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2130817 ] 00:21:58.948 [2024-11-26 07:31:42.921998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.948 [2024-11-26 07:31:42.950038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.948 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.948 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:58.948 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.otgW8jklEc 00:21:59.210 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:59.471 [2024-11-26 07:31:43.355331] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:59.471 [2024-11-26 07:31:43.359920] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:59.471 [2024-11-26 07:31:43.359939] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:59.471 [2024-11-26 07:31:43.359958] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:59.471 [2024-11-26 07:31:43.360611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1150980 (107): Transport endpoint is not connected 00:21:59.471 [2024-11-26 07:31:43.361607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1150980 (9): Bad file descriptor 00:21:59.471 [2024-11-26 07:31:43.362609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:21:59.471 [2024-11-26 07:31:43.362617] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:59.471 [2024-11-26 07:31:43.362622] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:21:59.471 [2024-11-26 07:31:43.362631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:21:59.471 request: 00:21:59.471 { 00:21:59.471 "name": "TLSTEST", 00:21:59.471 "trtype": "tcp", 00:21:59.471 "traddr": "10.0.0.2", 00:21:59.471 "adrfam": "ipv4", 00:21:59.471 "trsvcid": "4420", 00:21:59.471 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:59.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:59.471 "prchk_reftag": false, 00:21:59.471 "prchk_guard": false, 00:21:59.471 "hdgst": false, 00:21:59.471 "ddgst": false, 00:21:59.471 "psk": "key0", 00:21:59.471 "allow_unrecognized_csi": false, 00:21:59.471 "method": "bdev_nvme_attach_controller", 00:21:59.471 "req_id": 1 00:21:59.471 } 00:21:59.471 Got JSON-RPC error response 00:21:59.471 response: 00:21:59.471 { 00:21:59.471 "code": -5, 00:21:59.471 "message": "Input/output error" 00:21:59.471 } 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2130817 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2130817 ']' 00:21:59.472 07:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2130817 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2130817 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2130817' 00:21:59.472 killing process with pid 2130817 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2130817 00:21:59.472 Received shutdown signal, test time was about 10.000000 seconds 00:21:59.472 00:21:59.472 Latency(us) 00:21:59.472 [2024-11-26T06:31:43.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.472 [2024-11-26T06:31:43.609Z] =================================================================================================================== 00:21:59.472 [2024-11-26T06:31:43.609Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2130817 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:59.472 07:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2131141 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:59.472 07:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2131141 /var/tmp/bdevperf.sock 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2131141 ']' 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:59.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.472 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.472 [2024-11-26 07:31:43.590830] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:21:59.472 [2024-11-26 07:31:43.590890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2131141 ] 00:21:59.734 [2024-11-26 07:31:43.653595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.734 [2024-11-26 07:31:43.681981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.734 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.734 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:59.734 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:21:59.995 [2024-11-26 07:31:43.910829] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:21:59.995 [2024-11-26 07:31:43.910851] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:59.995 request: 00:21:59.995 { 00:21:59.995 "name": "key0", 00:21:59.995 "path": "", 00:21:59.995 "method": "keyring_file_add_key", 00:21:59.995 "req_id": 1 00:21:59.995 } 00:21:59.995 Got JSON-RPC error response 00:21:59.995 response: 00:21:59.995 { 00:21:59.995 "code": -1, 00:21:59.995 "message": "Operation not permitted" 00:21:59.995 } 00:21:59.995 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:59.995 [2024-11-26 07:31:44.063286] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:21:59.995 [2024-11-26 07:31:44.063307] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:59.995 request: 00:21:59.995 { 00:21:59.995 "name": "TLSTEST", 00:21:59.995 "trtype": "tcp", 00:21:59.995 "traddr": "10.0.0.2", 00:21:59.995 "adrfam": "ipv4", 00:21:59.995 "trsvcid": "4420", 00:21:59.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:59.995 "prchk_reftag": false, 00:21:59.995 "prchk_guard": false, 00:21:59.995 "hdgst": false, 00:21:59.995 "ddgst": false, 00:21:59.995 "psk": "key0", 00:21:59.995 "allow_unrecognized_csi": false, 00:21:59.995 "method": "bdev_nvme_attach_controller", 00:21:59.995 "req_id": 1 00:21:59.995 } 00:21:59.995 Got JSON-RPC error response 00:21:59.995 response: 00:21:59.995 { 00:21:59.995 "code": -126, 00:21:59.995 "message": "Required key not available" 00:21:59.995 } 00:21:59.995 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2131141 00:21:59.995 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2131141 ']' 00:21:59.995 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2131141 00:21:59.995 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:59.995 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.995 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2131141 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2131141' 00:22:00.257 killing process with pid 2131141 
00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2131141 00:22:00.257 Received shutdown signal, test time was about 10.000000 seconds 00:22:00.257 00:22:00.257 Latency(us) 00:22:00.257 [2024-11-26T06:31:44.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.257 [2024-11-26T06:31:44.394Z] =================================================================================================================== 00:22:00.257 [2024-11-26T06:31:44.394Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2131141 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2125677 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2125677 ']' 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2125677 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2125677 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2125677' 00:22:00.257 killing process with pid 2125677 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2125677 00:22:00.257 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2125677 00:22:00.518 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.XdJDwY1ZZ0 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:00.519 07:31:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.XdJDwY1ZZ0 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2131183 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2131183 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2131183 ']' 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.519 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.519 [2024-11-26 07:31:44.529324] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:22:00.519 [2024-11-26 07:31:44.529380] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.519 [2024-11-26 07:31:44.626182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.781 [2024-11-26 07:31:44.654709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.781 [2024-11-26 07:31:44.654737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.781 [2024-11-26 07:31:44.654743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.781 [2024-11-26 07:31:44.654747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.781 [2024-11-26 07:31:44.654752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
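The `key_long` value produced by `format_interchange_psk` earlier in this log follows the NVMe TLS PSK interchange layout: a `NVMeTLSkey-1` prefix, a two-digit hash indicator, then base64 of the configured secret with its little-endian CRC32 appended. The sketch below is a Python reconstruction, not SPDK's actual `format_key` shell helper; it assumes (as the logged base64 payload indicates) that the configured hex string is treated as literal ASCII bytes rather than decoded to binary.

```python
import base64
import zlib

def format_interchange_psk(secret: str, digest: int, prefix: str = "NVMeTLSkey-1") -> str:
    """Assemble an NVMe TLS PSK interchange string: the secret's ASCII
    bytes followed by their little-endian CRC32, base64-encoded."""
    psk = secret.encode("ascii")
    crc = zlib.crc32(psk).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(psk + crc).decode("utf-8")
    return "{}:{:02x}:{}:".format(prefix, digest, b64)

# Digest 2 selects the SHA-384 hash indicator, matching the "-- # digest=2"
# argument captured in the trace above.
print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))
```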
00:22:00.781 [2024-11-26 07:31:44.655253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.353 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.353 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:01.353 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:01.353 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:01.353 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.353 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.353 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.XdJDwY1ZZ0 00:22:01.353 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.XdJDwY1ZZ0 00:22:01.354 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:01.614 [2024-11-26 07:31:45.506762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.614 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:01.614 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:01.872 [2024-11-26 07:31:45.843587] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:01.872 [2024-11-26 07:31:45.843767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:01.872 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:02.132 malloc0 00:22:02.132 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:02.132 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.XdJDwY1ZZ0 00:22:02.393 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:02.393 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XdJDwY1ZZ0 00:22:02.393 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:02.393 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:02.393 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:02.393 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.XdJDwY1ZZ0 00:22:02.393 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.393 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.393 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2131671 00:22:02.393 07:31:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.393 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2131671 /var/tmp/bdevperf.sock 00:22:02.393 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2131671 ']' 00:22:02.393 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.393 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.394 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.394 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.394 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.654 [2024-11-26 07:31:46.536246] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:22:02.654 [2024-11-26 07:31:46.536287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2131671 ] 00:22:02.654 [2024-11-26 07:31:46.592251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.654 [2024-11-26 07:31:46.621340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.654 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.654 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:02.654 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XdJDwY1ZZ0 00:22:02.915 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:02.915 [2024-11-26 07:31:47.038830] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.176 TLSTESTn1 00:22:03.176 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:03.176 Running I/O for 10 seconds... 
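The MiB/s column in the bdevperf summary that follows is derived from the IOPS column and the fixed 4096-byte I/O size passed to bdevperf via `-o 4096`. A quick check of that arithmetic:

```python
def iops_to_mibps(iops: float, io_size_bytes: int = 4096) -> float:
    """Convert an IOPS figure to MiB/s for a fixed per-I/O size."""
    return iops * io_size_bytes / (1024 * 1024)

# 5666.08 IOPS at 4 KiB per I/O gives the 22.13 MiB/s bdevperf reports
# in the summary table for this run.
print(round(iops_to_mibps(5666.08), 2))
```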
00:22:05.507 5108.00 IOPS, 19.95 MiB/s [2024-11-26T06:31:50.586Z] 5572.00 IOPS, 21.77 MiB/s [2024-11-26T06:31:51.527Z] 5461.00 IOPS, 21.33 MiB/s [2024-11-26T06:31:52.470Z] 5339.75 IOPS, 20.86 MiB/s [2024-11-26T06:31:53.413Z] 5349.80 IOPS, 20.90 MiB/s [2024-11-26T06:31:54.355Z] 5521.83 IOPS, 21.57 MiB/s [2024-11-26T06:31:55.296Z] 5504.14 IOPS, 21.50 MiB/s [2024-11-26T06:31:56.679Z] 5523.62 IOPS, 21.58 MiB/s [2024-11-26T06:31:57.250Z] 5569.89 IOPS, 21.76 MiB/s [2024-11-26T06:31:57.511Z] 5660.40 IOPS, 22.11 MiB/s 00:22:13.374 Latency(us) 00:22:13.374 [2024-11-26T06:31:57.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.374 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:13.374 Verification LBA range: start 0x0 length 0x2000 00:22:13.374 TLSTESTn1 : 10.01 5666.08 22.13 0.00 0.00 22560.11 4669.44 23920.64 00:22:13.374 [2024-11-26T06:31:57.511Z] =================================================================================================================== 00:22:13.374 [2024-11-26T06:31:57.511Z] Total : 5666.08 22.13 0.00 0.00 22560.11 4669.44 23920.64 00:22:13.374 { 00:22:13.374 "results": [ 00:22:13.374 { 00:22:13.374 "job": "TLSTESTn1", 00:22:13.374 "core_mask": "0x4", 00:22:13.374 "workload": "verify", 00:22:13.374 "status": "finished", 00:22:13.374 "verify_range": { 00:22:13.374 "start": 0, 00:22:13.374 "length": 8192 00:22:13.374 }, 00:22:13.374 "queue_depth": 128, 00:22:13.374 "io_size": 4096, 00:22:13.374 "runtime": 10.012566, 00:22:13.374 "iops": 5666.0800038671405, 00:22:13.374 "mibps": 22.133125015106017, 00:22:13.374 "io_failed": 0, 00:22:13.374 "io_timeout": 0, 00:22:13.374 "avg_latency_us": 22560.106944934076, 00:22:13.374 "min_latency_us": 4669.44, 00:22:13.374 "max_latency_us": 23920.64 00:22:13.375 } 00:22:13.375 ], 00:22:13.375 "core_count": 1 00:22:13.375 } 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2131671 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2131671 ']' 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2131671 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2131671 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2131671' 00:22:13.375 killing process with pid 2131671 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2131671 00:22:13.375 Received shutdown signal, test time was about 10.000000 seconds 00:22:13.375 00:22:13.375 Latency(us) 00:22:13.375 [2024-11-26T06:31:57.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.375 [2024-11-26T06:31:57.512Z] =================================================================================================================== 00:22:13.375 [2024-11-26T06:31:57.512Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2131671 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.XdJDwY1ZZ0 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XdJDwY1ZZ0 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XdJDwY1ZZ0 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XdJDwY1ZZ0 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.XdJDwY1ZZ0 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2133880 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2133880 /var/tmp/bdevperf.sock 00:22:13.375 
07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2133880 ']' 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.375 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:13.375 [2024-11-26 07:31:57.498299] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:22:13.375 [2024-11-26 07:31:57.498356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2133880 ] 00:22:13.635 [2024-11-26 07:31:57.562872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.635 [2024-11-26 07:31:57.590973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.635 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.635 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:13.635 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XdJDwY1ZZ0 00:22:13.896 [2024-11-26 07:31:57.823940] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.XdJDwY1ZZ0': 0100666 00:22:13.896 [2024-11-26 07:31:57.823966] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:13.896 request: 00:22:13.896 { 00:22:13.896 "name": "key0", 00:22:13.896 "path": "/tmp/tmp.XdJDwY1ZZ0", 00:22:13.896 "method": "keyring_file_add_key", 00:22:13.896 "req_id": 1 00:22:13.896 } 00:22:13.896 Got JSON-RPC error response 00:22:13.896 response: 00:22:13.896 { 00:22:13.896 "code": -1, 00:22:13.896 "message": "Operation not permitted" 00:22:13.896 } 00:22:13.896 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:13.896 [2024-11-26 07:31:58.008470] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:13.896 [2024-11-26 07:31:58.008494] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:13.896 request: 00:22:13.896 { 00:22:13.896 "name": "TLSTEST", 00:22:13.896 "trtype": "tcp", 00:22:13.896 "traddr": "10.0.0.2", 00:22:13.896 "adrfam": "ipv4", 00:22:13.896 "trsvcid": "4420", 00:22:13.896 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:13.896 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:13.896 "prchk_reftag": false, 00:22:13.896 "prchk_guard": false, 00:22:13.896 "hdgst": false, 00:22:13.896 "ddgst": false, 00:22:13.896 "psk": "key0", 00:22:13.896 "allow_unrecognized_csi": false, 00:22:13.896 "method": "bdev_nvme_attach_controller", 00:22:13.896 "req_id": 1 00:22:13.896 } 00:22:13.896 Got JSON-RPC error response 00:22:13.896 response: 00:22:13.896 { 00:22:13.896 "code": -126, 00:22:13.896 "message": "Required key not available" 00:22:13.896 } 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2133880 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2133880 ']' 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2133880 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2133880 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2133880' 00:22:14.157 killing process with pid 2133880 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2133880 00:22:14.157 Received shutdown signal, test time was about 10.000000 seconds 00:22:14.157 00:22:14.157 Latency(us) 00:22:14.157 [2024-11-26T06:31:58.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.157 [2024-11-26T06:31:58.294Z] =================================================================================================================== 00:22:14.157 [2024-11-26T06:31:58.294Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2133880 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2131183 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2131183 ']' 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2131183 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.157 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2131183 00:22:14.158 
07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:14.158 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:14.158 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2131183' 00:22:14.158 killing process with pid 2131183 00:22:14.158 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2131183 00:22:14.158 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2131183 00:22:14.419 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:14.419 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:14.419 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:14.419 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:14.419 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:14.419 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2133924 00:22:14.419 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2133924 00:22:14.419 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2133924 ']' 00:22:14.419 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.420 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:14.420 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:22:14.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.420 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:14.420 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:14.420 [2024-11-26 07:31:58.436418] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:22:14.420 [2024-11-26 07:31:58.436480] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.420 [2024-11-26 07:31:58.534726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.681 [2024-11-26 07:31:58.566061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.681 [2024-11-26 07:31:58.566092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.681 [2024-11-26 07:31:58.566098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.681 [2024-11-26 07:31:58.566103] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.681 [2024-11-26 07:31:58.566107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
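The repeated `keyring_file_add_key` failures in this log stem from the `chmod 0666` at tls.sh@171: the keyring rejects key files that are readable by group or others (`Invalid permissions for key file ... 0100666`), while the earlier `chmod 0600` was accepted. The sketch below models that owner-only policy; the exact permission mask is an assumption inferred from these log messages, and SPDK's `keyring_file_check_path` is the authoritative check.

```python
import os
import stat
import tempfile

def key_file_permissions_ok(path: str) -> bool:
    """Reject key files with any group/other permission bits set,
    mirroring the owner-only policy the keyring errors above imply."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & 0o077 == 0

# A 0600 key file passes the check; a 0666 one does not.
with tempfile.NamedTemporaryFile() as f:
    os.chmod(f.name, 0o600)
    print(key_file_permissions_ok(f.name))  # True
    os.chmod(f.name, 0o666)
    print(key_file_permissions_ok(f.name))  # False
```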
00:22:14.681 [2024-11-26 07:31:58.566597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.253 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:15.253 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:15.253 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:15.253 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:15.253 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.253 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.253 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.XdJDwY1ZZ0 00:22:15.253 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:15.253 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.XdJDwY1ZZ0 00:22:15.253 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:22:15.253 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:15.253 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:22:15.253 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:15.253 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.XdJDwY1ZZ0 00:22:15.253 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.XdJDwY1ZZ0 00:22:15.253 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:15.513 [2024-11-26 07:31:59.403916] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.513 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:15.513 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:15.773 [2024-11-26 07:31:59.740738] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:15.773 [2024-11-26 07:31:59.740925] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.773 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:16.035 malloc0 00:22:16.035 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:16.035 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.XdJDwY1ZZ0 00:22:16.296 [2024-11-26 07:32:00.235846] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.XdJDwY1ZZ0': 0100666 00:22:16.296 [2024-11-26 07:32:00.235876] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:16.296 request: 00:22:16.296 { 00:22:16.296 "name": "key0", 00:22:16.296 "path": "/tmp/tmp.XdJDwY1ZZ0", 00:22:16.296 "method": "keyring_file_add_key", 00:22:16.296 "req_id": 1 
00:22:16.296 } 00:22:16.296 Got JSON-RPC error response 00:22:16.296 response: 00:22:16.296 { 00:22:16.296 "code": -1, 00:22:16.296 "message": "Operation not permitted" 00:22:16.296 } 00:22:16.296 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:16.296 [2024-11-26 07:32:00.384231] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:22:16.296 [2024-11-26 07:32:00.384259] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:16.296 request: 00:22:16.296 { 00:22:16.296 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.296 "host": "nqn.2016-06.io.spdk:host1", 00:22:16.296 "psk": "key0", 00:22:16.296 "method": "nvmf_subsystem_add_host", 00:22:16.296 "req_id": 1 00:22:16.296 } 00:22:16.296 Got JSON-RPC error response 00:22:16.296 response: 00:22:16.296 { 00:22:16.296 "code": -32603, 00:22:16.296 "message": "Internal error" 00:22:16.296 } 00:22:16.296 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:16.296 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:16.296 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:16.297 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:16.297 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2133924 00:22:16.297 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2133924 ']' 00:22:16.297 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2133924 00:22:16.297 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:16.297 07:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:16.297 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2133924 00:22:16.558 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:16.558 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:16.558 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2133924' 00:22:16.558 killing process with pid 2133924 00:22:16.558 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2133924 00:22:16.558 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2133924 00:22:16.558 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.XdJDwY1ZZ0 00:22:16.558 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:22:16.558 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:16.558 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:16.558 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.558 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2134518 00:22:16.558 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2134518 00:22:16.558 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:16.558 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2134518 ']' 00:22:16.558 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.558 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:16.558 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.558 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:16.558 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.558 [2024-11-26 07:32:00.634139] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:22:16.558 [2024-11-26 07:32:00.634199] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.819 [2024-11-26 07:32:00.732647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.819 [2024-11-26 07:32:00.761694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.819 [2024-11-26 07:32:00.761722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.819 [2024-11-26 07:32:00.761728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.819 [2024-11-26 07:32:00.761733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.819 [2024-11-26 07:32:00.761737] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
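For context on the failure earlier in this chunk: tls.sh@178 is a deliberate negative test (`NOT setup_nvmf_tgt`) run while the PSK file is still world-readable, and `keyring_file_check_path` rejects it ("Invalid permissions for key file ... 0100666", JSON-RPC code -1). The `chmod 0600` at tls.sh@182 is what lets the retry at tls.sh@186 succeed. A minimal standalone reproduction of that mode check follows — this is a hypothetical sketch of the check the log exercises, not SPDK code; the exact policy SPDK enforces may be stricter than "no group/other bits":

```shell
#!/bin/sh
# Reproduce the key-file permission gate seen in the log (assumption: the
# keyring rejects any PSK file readable by group/other, as 0666 was here).
key=$(mktemp)            # stand-in for /tmp/tmp.XdJDwY1ZZ0
chmod 0666 "$key"        # the state tls.sh@178 tests against
mode=$(stat -c '%a' "$key")
if [ "$mode" != "600" ]; then
  # Mirrors keyring_file_check_path's error for a too-open key file.
  echo "Invalid permissions for key file '$key': 0$mode"
fi
chmod 0600 "$key"        # what tls.sh@182 does before the retry
mode=$(stat -c '%a' "$key")
[ "$mode" = "600" ] && echo "key file '$key' accepted"
rm -f "$key"
```

With the mode fixed, the same `keyring_file_add_key key0 /tmp/tmp.XdJDwY1ZZ0` call that failed above completes without an error response later in this log.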
00:22:16.819 [2024-11-26 07:32:00.762198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.392 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.392 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:17.392 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:17.392 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:17.392 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.392 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.392 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.XdJDwY1ZZ0 00:22:17.392 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.XdJDwY1ZZ0 00:22:17.392 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:17.654 [2024-11-26 07:32:01.577397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.654 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:17.654 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:17.915 [2024-11-26 07:32:01.874123] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:17.915 [2024-11-26 07:32:01.874305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:17.915 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:17.915 malloc0 00:22:17.915 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:18.176 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.XdJDwY1ZZ0 00:22:18.437 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:18.437 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2134959 00:22:18.437 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:18.437 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:18.437 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2134959 /var/tmp/bdevperf.sock 00:22:18.437 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2134959 ']' 00:22:18.437 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.437 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.437 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:22:18.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:18.437 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.437 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.437 [2024-11-26 07:32:02.552162] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:22:18.437 [2024-11-26 07:32:02.552218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2134959 ] 00:22:18.698 [2024-11-26 07:32:02.615280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.698 [2024-11-26 07:32:02.644139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:18.698 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.698 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:18.698 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XdJDwY1ZZ0 00:22:18.960 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:18.960 [2024-11-26 07:32:03.053707] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:19.220 TLSTESTn1 00:22:19.220 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:19.482 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:22:19.482 "subsystems": [ 00:22:19.482 { 00:22:19.482 "subsystem": "keyring", 00:22:19.482 "config": [ 00:22:19.482 { 00:22:19.482 "method": "keyring_file_add_key", 00:22:19.482 "params": { 00:22:19.482 "name": "key0", 00:22:19.482 "path": "/tmp/tmp.XdJDwY1ZZ0" 00:22:19.482 } 00:22:19.482 } 00:22:19.482 ] 00:22:19.482 }, 00:22:19.482 { 00:22:19.482 "subsystem": "iobuf", 00:22:19.482 "config": [ 00:22:19.482 { 00:22:19.482 "method": "iobuf_set_options", 00:22:19.482 "params": { 00:22:19.482 "small_pool_count": 8192, 00:22:19.482 "large_pool_count": 1024, 00:22:19.482 "small_bufsize": 8192, 00:22:19.482 "large_bufsize": 135168, 00:22:19.482 "enable_numa": false 00:22:19.482 } 00:22:19.482 } 00:22:19.482 ] 00:22:19.482 }, 00:22:19.482 { 00:22:19.482 "subsystem": "sock", 00:22:19.482 "config": [ 00:22:19.482 { 00:22:19.482 "method": "sock_set_default_impl", 00:22:19.482 "params": { 00:22:19.482 "impl_name": "posix" 00:22:19.482 } 00:22:19.482 }, 00:22:19.482 { 00:22:19.482 "method": "sock_impl_set_options", 00:22:19.482 "params": { 00:22:19.482 "impl_name": "ssl", 00:22:19.482 "recv_buf_size": 4096, 00:22:19.482 "send_buf_size": 4096, 00:22:19.482 "enable_recv_pipe": true, 00:22:19.482 "enable_quickack": false, 00:22:19.482 "enable_placement_id": 0, 00:22:19.482 "enable_zerocopy_send_server": true, 00:22:19.482 "enable_zerocopy_send_client": false, 00:22:19.482 "zerocopy_threshold": 0, 00:22:19.482 "tls_version": 0, 00:22:19.482 "enable_ktls": false 00:22:19.482 } 00:22:19.482 }, 00:22:19.482 { 00:22:19.482 "method": "sock_impl_set_options", 00:22:19.482 "params": { 00:22:19.482 "impl_name": "posix", 00:22:19.482 "recv_buf_size": 2097152, 00:22:19.482 "send_buf_size": 2097152, 00:22:19.482 "enable_recv_pipe": true, 00:22:19.482 "enable_quickack": false, 00:22:19.482 "enable_placement_id": 0, 
00:22:19.482 "enable_zerocopy_send_server": true, 00:22:19.482 "enable_zerocopy_send_client": false, 00:22:19.482 "zerocopy_threshold": 0, 00:22:19.482 "tls_version": 0, 00:22:19.482 "enable_ktls": false 00:22:19.482 } 00:22:19.482 } 00:22:19.482 ] 00:22:19.482 }, 00:22:19.482 { 00:22:19.482 "subsystem": "vmd", 00:22:19.482 "config": [] 00:22:19.482 }, 00:22:19.482 { 00:22:19.482 "subsystem": "accel", 00:22:19.482 "config": [ 00:22:19.482 { 00:22:19.482 "method": "accel_set_options", 00:22:19.482 "params": { 00:22:19.482 "small_cache_size": 128, 00:22:19.482 "large_cache_size": 16, 00:22:19.482 "task_count": 2048, 00:22:19.482 "sequence_count": 2048, 00:22:19.482 "buf_count": 2048 00:22:19.482 } 00:22:19.482 } 00:22:19.482 ] 00:22:19.482 }, 00:22:19.482 { 00:22:19.482 "subsystem": "bdev", 00:22:19.482 "config": [ 00:22:19.482 { 00:22:19.482 "method": "bdev_set_options", 00:22:19.482 "params": { 00:22:19.482 "bdev_io_pool_size": 65535, 00:22:19.482 "bdev_io_cache_size": 256, 00:22:19.482 "bdev_auto_examine": true, 00:22:19.482 "iobuf_small_cache_size": 128, 00:22:19.482 "iobuf_large_cache_size": 16 00:22:19.482 } 00:22:19.482 }, 00:22:19.482 { 00:22:19.482 "method": "bdev_raid_set_options", 00:22:19.482 "params": { 00:22:19.482 "process_window_size_kb": 1024, 00:22:19.482 "process_max_bandwidth_mb_sec": 0 00:22:19.482 } 00:22:19.482 }, 00:22:19.482 { 00:22:19.482 "method": "bdev_iscsi_set_options", 00:22:19.482 "params": { 00:22:19.482 "timeout_sec": 30 00:22:19.483 } 00:22:19.483 }, 00:22:19.483 { 00:22:19.483 "method": "bdev_nvme_set_options", 00:22:19.483 "params": { 00:22:19.483 "action_on_timeout": "none", 00:22:19.483 "timeout_us": 0, 00:22:19.483 "timeout_admin_us": 0, 00:22:19.483 "keep_alive_timeout_ms": 10000, 00:22:19.483 "arbitration_burst": 0, 00:22:19.483 "low_priority_weight": 0, 00:22:19.483 "medium_priority_weight": 0, 00:22:19.483 "high_priority_weight": 0, 00:22:19.483 "nvme_adminq_poll_period_us": 10000, 00:22:19.483 "nvme_ioq_poll_period_us": 0, 
00:22:19.483 "io_queue_requests": 0, 00:22:19.483 "delay_cmd_submit": true, 00:22:19.483 "transport_retry_count": 4, 00:22:19.483 "bdev_retry_count": 3, 00:22:19.483 "transport_ack_timeout": 0, 00:22:19.483 "ctrlr_loss_timeout_sec": 0, 00:22:19.483 "reconnect_delay_sec": 0, 00:22:19.483 "fast_io_fail_timeout_sec": 0, 00:22:19.483 "disable_auto_failback": false, 00:22:19.483 "generate_uuids": false, 00:22:19.483 "transport_tos": 0, 00:22:19.483 "nvme_error_stat": false, 00:22:19.483 "rdma_srq_size": 0, 00:22:19.483 "io_path_stat": false, 00:22:19.483 "allow_accel_sequence": false, 00:22:19.483 "rdma_max_cq_size": 0, 00:22:19.483 "rdma_cm_event_timeout_ms": 0, 00:22:19.483 "dhchap_digests": [ 00:22:19.483 "sha256", 00:22:19.483 "sha384", 00:22:19.483 "sha512" 00:22:19.483 ], 00:22:19.483 "dhchap_dhgroups": [ 00:22:19.483 "null", 00:22:19.483 "ffdhe2048", 00:22:19.483 "ffdhe3072", 00:22:19.483 "ffdhe4096", 00:22:19.483 "ffdhe6144", 00:22:19.483 "ffdhe8192" 00:22:19.483 ] 00:22:19.483 } 00:22:19.483 }, 00:22:19.483 { 00:22:19.483 "method": "bdev_nvme_set_hotplug", 00:22:19.483 "params": { 00:22:19.483 "period_us": 100000, 00:22:19.483 "enable": false 00:22:19.483 } 00:22:19.483 }, 00:22:19.483 { 00:22:19.483 "method": "bdev_malloc_create", 00:22:19.483 "params": { 00:22:19.483 "name": "malloc0", 00:22:19.483 "num_blocks": 8192, 00:22:19.483 "block_size": 4096, 00:22:19.483 "physical_block_size": 4096, 00:22:19.483 "uuid": "66db2746-a53c-4cae-a0ae-0e57cce321ef", 00:22:19.483 "optimal_io_boundary": 0, 00:22:19.483 "md_size": 0, 00:22:19.483 "dif_type": 0, 00:22:19.483 "dif_is_head_of_md": false, 00:22:19.483 "dif_pi_format": 0 00:22:19.483 } 00:22:19.483 }, 00:22:19.483 { 00:22:19.483 "method": "bdev_wait_for_examine" 00:22:19.483 } 00:22:19.483 ] 00:22:19.483 }, 00:22:19.483 { 00:22:19.483 "subsystem": "nbd", 00:22:19.483 "config": [] 00:22:19.483 }, 00:22:19.483 { 00:22:19.483 "subsystem": "scheduler", 00:22:19.483 "config": [ 00:22:19.483 { 00:22:19.483 "method": 
"framework_set_scheduler", 00:22:19.483 "params": { 00:22:19.483 "name": "static" 00:22:19.483 } 00:22:19.483 } 00:22:19.483 ] 00:22:19.483 }, 00:22:19.483 { 00:22:19.483 "subsystem": "nvmf", 00:22:19.483 "config": [ 00:22:19.483 { 00:22:19.483 "method": "nvmf_set_config", 00:22:19.483 "params": { 00:22:19.483 "discovery_filter": "match_any", 00:22:19.483 "admin_cmd_passthru": { 00:22:19.483 "identify_ctrlr": false 00:22:19.483 }, 00:22:19.483 "dhchap_digests": [ 00:22:19.483 "sha256", 00:22:19.483 "sha384", 00:22:19.483 "sha512" 00:22:19.483 ], 00:22:19.483 "dhchap_dhgroups": [ 00:22:19.483 "null", 00:22:19.483 "ffdhe2048", 00:22:19.483 "ffdhe3072", 00:22:19.483 "ffdhe4096", 00:22:19.483 "ffdhe6144", 00:22:19.483 "ffdhe8192" 00:22:19.483 ] 00:22:19.483 } 00:22:19.483 }, 00:22:19.483 { 00:22:19.483 "method": "nvmf_set_max_subsystems", 00:22:19.483 "params": { 00:22:19.483 "max_subsystems": 1024 00:22:19.483 } 00:22:19.483 }, 00:22:19.483 { 00:22:19.483 "method": "nvmf_set_crdt", 00:22:19.483 "params": { 00:22:19.483 "crdt1": 0, 00:22:19.483 "crdt2": 0, 00:22:19.483 "crdt3": 0 00:22:19.483 } 00:22:19.483 }, 00:22:19.483 { 00:22:19.483 "method": "nvmf_create_transport", 00:22:19.483 "params": { 00:22:19.483 "trtype": "TCP", 00:22:19.483 "max_queue_depth": 128, 00:22:19.483 "max_io_qpairs_per_ctrlr": 127, 00:22:19.483 "in_capsule_data_size": 4096, 00:22:19.483 "max_io_size": 131072, 00:22:19.483 "io_unit_size": 131072, 00:22:19.483 "max_aq_depth": 128, 00:22:19.483 "num_shared_buffers": 511, 00:22:19.483 "buf_cache_size": 4294967295, 00:22:19.483 "dif_insert_or_strip": false, 00:22:19.483 "zcopy": false, 00:22:19.483 "c2h_success": false, 00:22:19.483 "sock_priority": 0, 00:22:19.483 "abort_timeout_sec": 1, 00:22:19.483 "ack_timeout": 0, 00:22:19.483 "data_wr_pool_size": 0 00:22:19.483 } 00:22:19.483 }, 00:22:19.483 { 00:22:19.483 "method": "nvmf_create_subsystem", 00:22:19.483 "params": { 00:22:19.483 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.483 
"allow_any_host": false, 00:22:19.483 "serial_number": "SPDK00000000000001", 00:22:19.483 "model_number": "SPDK bdev Controller", 00:22:19.483 "max_namespaces": 10, 00:22:19.483 "min_cntlid": 1, 00:22:19.483 "max_cntlid": 65519, 00:22:19.483 "ana_reporting": false 00:22:19.483 } 00:22:19.483 }, 00:22:19.483 { 00:22:19.483 "method": "nvmf_subsystem_add_host", 00:22:19.483 "params": { 00:22:19.483 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.483 "host": "nqn.2016-06.io.spdk:host1", 00:22:19.483 "psk": "key0" 00:22:19.483 } 00:22:19.483 }, 00:22:19.483 { 00:22:19.483 "method": "nvmf_subsystem_add_ns", 00:22:19.483 "params": { 00:22:19.483 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.483 "namespace": { 00:22:19.483 "nsid": 1, 00:22:19.483 "bdev_name": "malloc0", 00:22:19.483 "nguid": "66DB2746A53C4CAEA0AE0E57CCE321EF", 00:22:19.483 "uuid": "66db2746-a53c-4cae-a0ae-0e57cce321ef", 00:22:19.483 "no_auto_visible": false 00:22:19.483 } 00:22:19.483 } 00:22:19.483 }, 00:22:19.483 { 00:22:19.483 "method": "nvmf_subsystem_add_listener", 00:22:19.483 "params": { 00:22:19.483 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.483 "listen_address": { 00:22:19.483 "trtype": "TCP", 00:22:19.483 "adrfam": "IPv4", 00:22:19.483 "traddr": "10.0.0.2", 00:22:19.483 "trsvcid": "4420" 00:22:19.483 }, 00:22:19.483 "secure_channel": true 00:22:19.483 } 00:22:19.483 } 00:22:19.483 ] 00:22:19.483 } 00:22:19.483 ] 00:22:19.483 }' 00:22:19.483 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:19.745 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:22:19.745 "subsystems": [ 00:22:19.745 { 00:22:19.745 "subsystem": "keyring", 00:22:19.745 "config": [ 00:22:19.745 { 00:22:19.745 "method": "keyring_file_add_key", 00:22:19.745 "params": { 00:22:19.745 "name": "key0", 00:22:19.745 "path": "/tmp/tmp.XdJDwY1ZZ0" 00:22:19.745 } 
00:22:19.745 } 00:22:19.745 ] 00:22:19.745 }, 00:22:19.745 { 00:22:19.745 "subsystem": "iobuf", 00:22:19.745 "config": [ 00:22:19.745 { 00:22:19.745 "method": "iobuf_set_options", 00:22:19.745 "params": { 00:22:19.745 "small_pool_count": 8192, 00:22:19.745 "large_pool_count": 1024, 00:22:19.745 "small_bufsize": 8192, 00:22:19.745 "large_bufsize": 135168, 00:22:19.745 "enable_numa": false 00:22:19.745 } 00:22:19.745 } 00:22:19.745 ] 00:22:19.745 }, 00:22:19.745 { 00:22:19.745 "subsystem": "sock", 00:22:19.745 "config": [ 00:22:19.745 { 00:22:19.745 "method": "sock_set_default_impl", 00:22:19.745 "params": { 00:22:19.745 "impl_name": "posix" 00:22:19.745 } 00:22:19.745 }, 00:22:19.745 { 00:22:19.745 "method": "sock_impl_set_options", 00:22:19.745 "params": { 00:22:19.745 "impl_name": "ssl", 00:22:19.745 "recv_buf_size": 4096, 00:22:19.745 "send_buf_size": 4096, 00:22:19.745 "enable_recv_pipe": true, 00:22:19.745 "enable_quickack": false, 00:22:19.745 "enable_placement_id": 0, 00:22:19.745 "enable_zerocopy_send_server": true, 00:22:19.745 "enable_zerocopy_send_client": false, 00:22:19.745 "zerocopy_threshold": 0, 00:22:19.745 "tls_version": 0, 00:22:19.745 "enable_ktls": false 00:22:19.745 } 00:22:19.745 }, 00:22:19.745 { 00:22:19.745 "method": "sock_impl_set_options", 00:22:19.745 "params": { 00:22:19.745 "impl_name": "posix", 00:22:19.745 "recv_buf_size": 2097152, 00:22:19.745 "send_buf_size": 2097152, 00:22:19.745 "enable_recv_pipe": true, 00:22:19.745 "enable_quickack": false, 00:22:19.745 "enable_placement_id": 0, 00:22:19.745 "enable_zerocopy_send_server": true, 00:22:19.745 "enable_zerocopy_send_client": false, 00:22:19.745 "zerocopy_threshold": 0, 00:22:19.745 "tls_version": 0, 00:22:19.745 "enable_ktls": false 00:22:19.745 } 00:22:19.746 } 00:22:19.746 ] 00:22:19.746 }, 00:22:19.746 { 00:22:19.746 "subsystem": "vmd", 00:22:19.746 "config": [] 00:22:19.746 }, 00:22:19.746 { 00:22:19.746 "subsystem": "accel", 00:22:19.746 "config": [ 00:22:19.746 { 00:22:19.746 
"method": "accel_set_options", 00:22:19.746 "params": { 00:22:19.746 "small_cache_size": 128, 00:22:19.746 "large_cache_size": 16, 00:22:19.746 "task_count": 2048, 00:22:19.746 "sequence_count": 2048, 00:22:19.746 "buf_count": 2048 00:22:19.746 } 00:22:19.746 } 00:22:19.746 ] 00:22:19.746 }, 00:22:19.746 { 00:22:19.746 "subsystem": "bdev", 00:22:19.746 "config": [ 00:22:19.746 { 00:22:19.746 "method": "bdev_set_options", 00:22:19.746 "params": { 00:22:19.746 "bdev_io_pool_size": 65535, 00:22:19.746 "bdev_io_cache_size": 256, 00:22:19.746 "bdev_auto_examine": true, 00:22:19.746 "iobuf_small_cache_size": 128, 00:22:19.746 "iobuf_large_cache_size": 16 00:22:19.746 } 00:22:19.746 }, 00:22:19.746 { 00:22:19.746 "method": "bdev_raid_set_options", 00:22:19.746 "params": { 00:22:19.746 "process_window_size_kb": 1024, 00:22:19.746 "process_max_bandwidth_mb_sec": 0 00:22:19.746 } 00:22:19.746 }, 00:22:19.746 { 00:22:19.746 "method": "bdev_iscsi_set_options", 00:22:19.746 "params": { 00:22:19.746 "timeout_sec": 30 00:22:19.746 } 00:22:19.746 }, 00:22:19.746 { 00:22:19.746 "method": "bdev_nvme_set_options", 00:22:19.746 "params": { 00:22:19.746 "action_on_timeout": "none", 00:22:19.746 "timeout_us": 0, 00:22:19.746 "timeout_admin_us": 0, 00:22:19.746 "keep_alive_timeout_ms": 10000, 00:22:19.746 "arbitration_burst": 0, 00:22:19.746 "low_priority_weight": 0, 00:22:19.746 "medium_priority_weight": 0, 00:22:19.746 "high_priority_weight": 0, 00:22:19.746 "nvme_adminq_poll_period_us": 10000, 00:22:19.746 "nvme_ioq_poll_period_us": 0, 00:22:19.746 "io_queue_requests": 512, 00:22:19.746 "delay_cmd_submit": true, 00:22:19.746 "transport_retry_count": 4, 00:22:19.746 "bdev_retry_count": 3, 00:22:19.746 "transport_ack_timeout": 0, 00:22:19.746 "ctrlr_loss_timeout_sec": 0, 00:22:19.746 "reconnect_delay_sec": 0, 00:22:19.746 "fast_io_fail_timeout_sec": 0, 00:22:19.746 "disable_auto_failback": false, 00:22:19.746 "generate_uuids": false, 00:22:19.746 "transport_tos": 0, 00:22:19.746 
"nvme_error_stat": false, 00:22:19.746 "rdma_srq_size": 0, 00:22:19.746 "io_path_stat": false, 00:22:19.746 "allow_accel_sequence": false, 00:22:19.746 "rdma_max_cq_size": 0, 00:22:19.746 "rdma_cm_event_timeout_ms": 0, 00:22:19.746 "dhchap_digests": [ 00:22:19.746 "sha256", 00:22:19.746 "sha384", 00:22:19.746 "sha512" 00:22:19.746 ], 00:22:19.746 "dhchap_dhgroups": [ 00:22:19.746 "null", 00:22:19.746 "ffdhe2048", 00:22:19.746 "ffdhe3072", 00:22:19.746 "ffdhe4096", 00:22:19.746 "ffdhe6144", 00:22:19.746 "ffdhe8192" 00:22:19.746 ] 00:22:19.746 } 00:22:19.746 }, 00:22:19.746 { 00:22:19.746 "method": "bdev_nvme_attach_controller", 00:22:19.746 "params": { 00:22:19.746 "name": "TLSTEST", 00:22:19.746 "trtype": "TCP", 00:22:19.746 "adrfam": "IPv4", 00:22:19.746 "traddr": "10.0.0.2", 00:22:19.746 "trsvcid": "4420", 00:22:19.746 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.746 "prchk_reftag": false, 00:22:19.746 "prchk_guard": false, 00:22:19.746 "ctrlr_loss_timeout_sec": 0, 00:22:19.746 "reconnect_delay_sec": 0, 00:22:19.746 "fast_io_fail_timeout_sec": 0, 00:22:19.746 "psk": "key0", 00:22:19.746 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:19.746 "hdgst": false, 00:22:19.746 "ddgst": false, 00:22:19.746 "multipath": "multipath" 00:22:19.746 } 00:22:19.746 }, 00:22:19.746 { 00:22:19.746 "method": "bdev_nvme_set_hotplug", 00:22:19.746 "params": { 00:22:19.746 "period_us": 100000, 00:22:19.746 "enable": false 00:22:19.746 } 00:22:19.746 }, 00:22:19.746 { 00:22:19.746 "method": "bdev_wait_for_examine" 00:22:19.746 } 00:22:19.746 ] 00:22:19.746 }, 00:22:19.746 { 00:22:19.746 "subsystem": "nbd", 00:22:19.746 "config": [] 00:22:19.746 } 00:22:19.746 ] 00:22:19.746 }' 00:22:19.746 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2134959 00:22:19.746 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2134959 ']' 00:22:19.746 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2134959 00:22:19.746 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:19.746 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:19.746 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2134959 00:22:19.746 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:19.746 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:19.746 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2134959' 00:22:19.746 killing process with pid 2134959 00:22:19.746 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2134959 00:22:19.746 Received shutdown signal, test time was about 10.000000 seconds 00:22:19.746 00:22:19.746 Latency(us) 00:22:19.746 [2024-11-26T06:32:03.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.746 [2024-11-26T06:32:03.883Z] =================================================================================================================== 00:22:19.746 [2024-11-26T06:32:03.883Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:19.746 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2134959 00:22:19.746 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2134518 00:22:19.746 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2134518 ']' 00:22:19.746 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2134518 00:22:19.746 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:19.746 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:19.746 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2134518
00:22:20.009 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:20.009 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:20.009 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2134518'
00:22:20.009 killing process with pid 2134518
00:22:20.009 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2134518
00:22:20.009 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2134518
00:22:20.009 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62
00:22:20.009 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:20.009 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:20.009 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:20.009 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{
00:22:20.009 "subsystems": [
00:22:20.009 {
00:22:20.009 "subsystem": "keyring",
00:22:20.009 "config": [
00:22:20.009 {
00:22:20.009 "method": "keyring_file_add_key",
00:22:20.009 "params": {
00:22:20.009 "name": "key0",
00:22:20.009 "path": "/tmp/tmp.XdJDwY1ZZ0"
00:22:20.009 }
00:22:20.009 }
00:22:20.009 ]
00:22:20.009 },
00:22:20.009 {
00:22:20.009 "subsystem": "iobuf",
00:22:20.009 "config": [
00:22:20.009 {
00:22:20.009 "method": "iobuf_set_options",
00:22:20.009 "params": {
00:22:20.009 "small_pool_count": 8192,
00:22:20.009 "large_pool_count": 1024,
00:22:20.009 "small_bufsize": 8192,
00:22:20.009 "large_bufsize": 135168,
00:22:20.009 "enable_numa": false
00:22:20.009 }
00:22:20.009 }
00:22:20.009 ]
00:22:20.009 },
00:22:20.009 {
00:22:20.009 "subsystem": "sock",
00:22:20.009 "config": [
00:22:20.009 {
00:22:20.009 "method": "sock_set_default_impl",
00:22:20.009 "params": {
00:22:20.009 "impl_name": "posix"
00:22:20.009 }
00:22:20.009 },
00:22:20.009 {
00:22:20.009 "method": "sock_impl_set_options",
00:22:20.009 "params": {
00:22:20.009 "impl_name": "ssl",
00:22:20.009 "recv_buf_size": 4096,
00:22:20.009 "send_buf_size": 4096,
00:22:20.009 "enable_recv_pipe": true,
00:22:20.009 "enable_quickack": false,
00:22:20.009 "enable_placement_id": 0,
00:22:20.009 "enable_zerocopy_send_server": true,
00:22:20.009 "enable_zerocopy_send_client": false,
00:22:20.009 "zerocopy_threshold": 0,
00:22:20.009 "tls_version": 0,
00:22:20.009 "enable_ktls": false
00:22:20.009 }
00:22:20.009 },
00:22:20.009 {
00:22:20.009 "method": "sock_impl_set_options",
00:22:20.009 "params": {
00:22:20.009 "impl_name": "posix",
00:22:20.009 "recv_buf_size": 2097152,
00:22:20.009 "send_buf_size": 2097152,
00:22:20.009 "enable_recv_pipe": true,
00:22:20.009 "enable_quickack": false,
00:22:20.009 "enable_placement_id": 0,
00:22:20.009 "enable_zerocopy_send_server": true,
00:22:20.009 "enable_zerocopy_send_client": false,
00:22:20.009 "zerocopy_threshold": 0,
00:22:20.009 "tls_version": 0,
00:22:20.009 "enable_ktls": false
00:22:20.009 }
00:22:20.009 }
00:22:20.009 ]
00:22:20.009 },
00:22:20.009 {
00:22:20.009 "subsystem": "vmd",
00:22:20.009 "config": []
00:22:20.009 },
00:22:20.009 {
00:22:20.009 "subsystem": "accel",
00:22:20.009 "config": [
00:22:20.009 {
00:22:20.009 "method": "accel_set_options",
00:22:20.009 "params": {
00:22:20.009 "small_cache_size": 128,
00:22:20.009 "large_cache_size": 16,
00:22:20.009 "task_count": 2048,
00:22:20.009 "sequence_count": 2048,
00:22:20.009 "buf_count": 2048
00:22:20.009 }
00:22:20.009 }
00:22:20.009 ]
00:22:20.009 },
00:22:20.009 {
00:22:20.009 "subsystem": "bdev",
00:22:20.009 "config": [
00:22:20.009 {
00:22:20.009 "method": "bdev_set_options",
00:22:20.009 "params": {
00:22:20.009 "bdev_io_pool_size": 65535,
00:22:20.009 "bdev_io_cache_size": 256,
00:22:20.009 "bdev_auto_examine": true,
00:22:20.009 "iobuf_small_cache_size": 128,
00:22:20.009 "iobuf_large_cache_size": 16
00:22:20.009 }
00:22:20.009 },
00:22:20.009 {
00:22:20.009 "method": "bdev_raid_set_options",
00:22:20.009 "params": {
00:22:20.009 "process_window_size_kb": 1024,
00:22:20.009 "process_max_bandwidth_mb_sec": 0
00:22:20.009 }
00:22:20.009 },
00:22:20.009 {
00:22:20.009 "method": "bdev_iscsi_set_options",
00:22:20.009 "params": {
00:22:20.009 "timeout_sec": 30
00:22:20.009 }
00:22:20.009 },
00:22:20.009 {
00:22:20.009 "method": "bdev_nvme_set_options",
00:22:20.009 "params": {
00:22:20.009 "action_on_timeout": "none",
00:22:20.009 "timeout_us": 0,
00:22:20.009 "timeout_admin_us": 0,
00:22:20.009 "keep_alive_timeout_ms": 10000,
00:22:20.009 "arbitration_burst": 0,
00:22:20.009 "low_priority_weight": 0,
00:22:20.009 "medium_priority_weight": 0,
00:22:20.009 "high_priority_weight": 0,
00:22:20.009 "nvme_adminq_poll_period_us": 10000,
00:22:20.009 "nvme_ioq_poll_period_us": 0,
00:22:20.009 "io_queue_requests": 0,
00:22:20.009 "delay_cmd_submit": true,
00:22:20.009 "transport_retry_count": 4,
00:22:20.009 "bdev_retry_count": 3,
00:22:20.009 "transport_ack_timeout": 0,
00:22:20.009 "ctrlr_loss_timeout_sec": 0,
00:22:20.009 "reconnect_delay_sec": 0,
00:22:20.009 "fast_io_fail_timeout_sec": 0,
00:22:20.009 "disable_auto_failback": false,
00:22:20.009 "generate_uuids": false,
00:22:20.009 "transport_tos": 0,
00:22:20.009 "nvme_error_stat": false,
00:22:20.009 "rdma_srq_size": 0,
00:22:20.009 "io_path_stat": false,
00:22:20.010 "allow_accel_sequence": false,
00:22:20.010 "rdma_max_cq_size": 0,
00:22:20.010 "rdma_cm_event_timeout_ms": 0,
00:22:20.010 "dhchap_digests": [
00:22:20.010 "sha256",
00:22:20.010 "sha384",
00:22:20.010 "sha512"
00:22:20.010 ],
00:22:20.010 "dhchap_dhgroups": [
00:22:20.010 "null",
00:22:20.010 "ffdhe2048",
00:22:20.010 "ffdhe3072",
00:22:20.010 "ffdhe4096",
00:22:20.010 "ffdhe6144",
00:22:20.010 "ffdhe8192"
00:22:20.010 ]
00:22:20.010 }
00:22:20.010 },
00:22:20.010 {
00:22:20.010 "method": "bdev_nvme_set_hotplug",
00:22:20.010 "params": {
00:22:20.010 "period_us": 100000,
00:22:20.010 "enable": false
00:22:20.010 }
00:22:20.010 },
00:22:20.010 {
00:22:20.010 "method": "bdev_malloc_create",
00:22:20.010 "params": {
00:22:20.010 "name": "malloc0",
00:22:20.010 "num_blocks": 8192,
00:22:20.010 "block_size": 4096,
00:22:20.010 "physical_block_size": 4096,
00:22:20.010 "uuid": "66db2746-a53c-4cae-a0ae-0e57cce321ef",
00:22:20.010 "optimal_io_boundary": 0,
00:22:20.010 "md_size": 0,
00:22:20.010 "dif_type": 0,
00:22:20.010 "dif_is_head_of_md": false,
00:22:20.010 "dif_pi_format": 0
00:22:20.010 }
00:22:20.010 },
00:22:20.010 {
00:22:20.010 "method": "bdev_wait_for_examine"
00:22:20.010 }
00:22:20.010 ]
00:22:20.010 },
00:22:20.010 {
00:22:20.010 "subsystem": "nbd",
00:22:20.010 "config": []
00:22:20.010 },
00:22:20.010 {
00:22:20.010 "subsystem": "scheduler",
00:22:20.010 "config": [
00:22:20.010 {
00:22:20.010 "method": "framework_set_scheduler",
00:22:20.010 "params": {
00:22:20.010 "name": "static"
00:22:20.010 }
00:22:20.010 }
00:22:20.010 ]
00:22:20.010 },
00:22:20.010 {
00:22:20.010 "subsystem": "nvmf",
00:22:20.010 "config": [
00:22:20.010 {
00:22:20.010 "method": "nvmf_set_config",
00:22:20.010 "params": {
00:22:20.010 "discovery_filter": "match_any",
00:22:20.010 "admin_cmd_passthru": {
00:22:20.010 "identify_ctrlr": false
00:22:20.010 },
00:22:20.010 "dhchap_digests": [
00:22:20.010 "sha256",
00:22:20.010 "sha384",
00:22:20.010 "sha512"
00:22:20.010 ],
00:22:20.010 "dhchap_dhgroups": [
00:22:20.010 "null",
00:22:20.010 "ffdhe2048",
00:22:20.010 "ffdhe3072",
00:22:20.010 "ffdhe4096",
00:22:20.010 "ffdhe6144",
00:22:20.010 "ffdhe8192"
00:22:20.010 ]
00:22:20.010 }
00:22:20.010 },
00:22:20.010 {
00:22:20.010 "method": "nvmf_set_max_subsystems",
00:22:20.010 "params": {
00:22:20.010 "max_subsystems": 1024
00:22:20.010 }
00:22:20.010 },
00:22:20.010 {
00:22:20.010 "method": "nvmf_set_crdt",
00:22:20.010 "params": {
00:22:20.010 "crdt1": 0,
00:22:20.010 "crdt2": 0,
00:22:20.010 "crdt3": 0
00:22:20.010 }
00:22:20.010 },
00:22:20.010 {
00:22:20.010 "method": "nvmf_create_transport",
00:22:20.010 "params": {
00:22:20.010 "trtype": "TCP",
00:22:20.010 "max_queue_depth": 128,
00:22:20.010 "max_io_qpairs_per_ctrlr": 127,
00:22:20.010 "in_capsule_data_size": 4096,
00:22:20.010 "max_io_size": 131072,
00:22:20.010 "io_unit_size": 131072,
00:22:20.010 "max_aq_depth": 128,
00:22:20.010 "num_shared_buffers": 511,
00:22:20.010 "buf_cache_size": 4294967295,
00:22:20.010 "dif_insert_or_strip": false,
00:22:20.010 "zcopy": false,
00:22:20.010 "c2h_success": false,
00:22:20.010 "sock_priority": 0,
00:22:20.010 "abort_timeout_sec": 1,
00:22:20.010 "ack_timeout": 0,
00:22:20.010 "data_wr_pool_size": 0
00:22:20.010 }
00:22:20.010 },
00:22:20.010 {
00:22:20.010 "method": "nvmf_create_subsystem",
00:22:20.010 "params": {
00:22:20.010 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:20.010 "allow_any_host": false,
00:22:20.010 "serial_number": "SPDK00000000000001",
00:22:20.010 "model_number": "SPDK bdev Controller",
00:22:20.010 "max_namespaces": 10,
00:22:20.010 "min_cntlid": 1,
00:22:20.010 "max_cntlid": 65519,
00:22:20.010 "ana_reporting": false
00:22:20.010 }
00:22:20.010 },
00:22:20.010 {
00:22:20.010 "method": "nvmf_subsystem_add_host",
00:22:20.010 "params": {
00:22:20.010 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:20.010 "host": "nqn.2016-06.io.spdk:host1",
00:22:20.010 "psk": "key0"
00:22:20.010 }
00:22:20.010 },
00:22:20.010 {
00:22:20.010 "method": "nvmf_subsystem_add_ns",
00:22:20.010 "params": {
00:22:20.010 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:20.010 "namespace": {
00:22:20.010 "nsid": 1,
00:22:20.010 "bdev_name": "malloc0",
00:22:20.010 "nguid": "66DB2746A53C4CAEA0AE0E57CCE321EF",
00:22:20.010 "uuid": "66db2746-a53c-4cae-a0ae-0e57cce321ef",
00:22:20.010 "no_auto_visible": false
00:22:20.010 }
00:22:20.010 }
00:22:20.010 },
00:22:20.010 {
00:22:20.010 "method": "nvmf_subsystem_add_listener",
00:22:20.010 "params": {
00:22:20.010 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:20.010 "listen_address": {
00:22:20.010 "trtype": "TCP",
00:22:20.010 "adrfam": "IPv4",
00:22:20.010 "traddr": "10.0.0.2",
00:22:20.010 "trsvcid": "4420"
00:22:20.010 },
00:22:20.010 "secure_channel": true
00:22:20.010 }
00:22:20.010 }
00:22:20.010 ]
00:22:20.010 }
00:22:20.010 ]
00:22:20.010 }'
00:22:20.010 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2135222
00:22:20.010 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2135222
00:22:20.010 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62
00:22:20.010 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2135222 ']'
00:22:20.010 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:20.010 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:20.010 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:20.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:20.010 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:20.010 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:20.010 [2024-11-26 07:32:04.065580] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... [2024-11-26 07:32:04.065649] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:20.272 [2024-11-26 07:32:04.167519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:20.272 [2024-11-26 07:32:04.197393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:20.272 [2024-11-26 07:32:04.197425] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:20.272 [2024-11-26 07:32:04.197430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:20.272 [2024-11-26 07:32:04.197436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:20.272 [2024-11-26 07:32:04.197440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:20.272 [2024-11-26 07:32:04.197953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:20.272 [2024-11-26 07:32:04.390770] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:20.533 [2024-11-26 07:32:04.422798] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:22:20.533 [2024-11-26 07:32:04.422988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:20.795 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:20.795 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:20.795 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:20.795 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:20.795 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:20.795 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:20.795 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2135339
00:22:20.795 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2135339 /var/tmp/bdevperf.sock
00:22:20.795 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2135339 ']'
00:22:20.795 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:20.795 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:20.795 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:20.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:20.795 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63
00:22:20.795 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:20.795 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:20.795 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{
00:22:20.795 "subsystems": [
00:22:20.795 {
00:22:20.795 "subsystem": "keyring",
00:22:20.795 "config": [
00:22:20.795 {
00:22:20.795 "method": "keyring_file_add_key",
00:22:20.795 "params": {
00:22:20.795 "name": "key0",
00:22:20.795 "path": "/tmp/tmp.XdJDwY1ZZ0"
00:22:20.795 }
00:22:20.795 }
00:22:20.795 ]
00:22:20.795 },
00:22:20.795 {
00:22:20.795 "subsystem": "iobuf",
00:22:20.795 "config": [
00:22:20.795 {
00:22:20.795 "method": "iobuf_set_options",
00:22:20.795 "params": {
00:22:20.795 "small_pool_count": 8192,
00:22:20.795 "large_pool_count": 1024,
00:22:20.795 "small_bufsize": 8192,
00:22:20.795 "large_bufsize": 135168,
00:22:20.795 "enable_numa": false
00:22:20.795 }
00:22:20.795 }
00:22:20.795 ]
00:22:20.795 },
00:22:20.795 {
00:22:20.795 "subsystem": "sock",
00:22:20.795 "config": [
00:22:20.795 {
00:22:20.795 "method": "sock_set_default_impl",
00:22:20.795 "params": {
00:22:20.795 "impl_name": "posix"
00:22:20.795 }
00:22:20.795 },
00:22:20.795 {
00:22:20.795 "method": "sock_impl_set_options",
00:22:20.795 "params": {
00:22:20.795 "impl_name": "ssl",
00:22:20.795 "recv_buf_size": 4096,
00:22:20.795 "send_buf_size": 4096,
00:22:20.795 "enable_recv_pipe": true,
00:22:20.795 "enable_quickack": false,
00:22:20.795 "enable_placement_id": 0,
00:22:20.795 "enable_zerocopy_send_server": true,
00:22:20.795 "enable_zerocopy_send_client": false,
00:22:20.795 "zerocopy_threshold": 0,
00:22:20.795 "tls_version": 0,
00:22:20.795 "enable_ktls": false
00:22:20.795 }
00:22:20.795 },
00:22:20.795 {
00:22:20.795 "method": "sock_impl_set_options",
00:22:20.795 "params": {
00:22:20.795 "impl_name": "posix",
00:22:20.795 "recv_buf_size": 2097152,
00:22:20.795 "send_buf_size": 2097152,
00:22:20.795 "enable_recv_pipe": true,
00:22:20.795 "enable_quickack": false,
00:22:20.795 "enable_placement_id": 0,
00:22:20.795 "enable_zerocopy_send_server": true,
00:22:20.795 "enable_zerocopy_send_client": false,
00:22:20.795 "zerocopy_threshold": 0,
00:22:20.795 "tls_version": 0,
00:22:20.795 "enable_ktls": false
00:22:20.795 }
00:22:20.795 }
00:22:20.795 ]
00:22:20.795 },
00:22:20.795 {
00:22:20.795 "subsystem": "vmd",
00:22:20.795 "config": []
00:22:20.795 },
00:22:20.795 {
00:22:20.795 "subsystem": "accel",
00:22:20.795 "config": [
00:22:20.795 {
00:22:20.795 "method": "accel_set_options",
00:22:20.795 "params": {
00:22:20.795 "small_cache_size": 128,
00:22:20.795 "large_cache_size": 16,
00:22:20.795 "task_count": 2048,
00:22:20.795 "sequence_count": 2048,
00:22:20.795 "buf_count": 2048
00:22:20.795 }
00:22:20.795 }
00:22:20.795 ]
00:22:20.795 },
00:22:20.795 {
00:22:20.795 "subsystem": "bdev",
00:22:20.795 "config": [
00:22:20.795 {
00:22:20.795 "method": "bdev_set_options",
00:22:20.795 "params": {
00:22:20.795 "bdev_io_pool_size": 65535,
00:22:20.795 "bdev_io_cache_size": 256,
00:22:20.795 "bdev_auto_examine": true,
00:22:20.795 "iobuf_small_cache_size": 128,
00:22:20.795 "iobuf_large_cache_size": 16
00:22:20.795 }
00:22:20.795 },
00:22:20.795 {
00:22:20.795 "method": "bdev_raid_set_options",
00:22:20.795 "params": {
00:22:20.795 "process_window_size_kb": 1024,
00:22:20.795 "process_max_bandwidth_mb_sec": 0
00:22:20.795 }
00:22:20.795 },
00:22:20.795 {
00:22:20.795 "method": "bdev_iscsi_set_options",
00:22:20.795 "params": {
00:22:20.795 "timeout_sec": 30
00:22:20.795 }
00:22:20.795 },
00:22:20.795 {
00:22:20.795 "method": "bdev_nvme_set_options",
00:22:20.795 "params": {
00:22:20.795 "action_on_timeout": "none",
00:22:20.795 "timeout_us": 0,
00:22:20.795 "timeout_admin_us": 0,
00:22:20.795 "keep_alive_timeout_ms": 10000,
00:22:20.795 "arbitration_burst": 0,
00:22:20.795 "low_priority_weight": 0,
00:22:20.795 "medium_priority_weight": 0,
00:22:20.795 "high_priority_weight": 0,
00:22:20.795 "nvme_adminq_poll_period_us": 10000,
00:22:20.795 "nvme_ioq_poll_period_us": 0,
00:22:20.795 "io_queue_requests": 512,
00:22:20.795 "delay_cmd_submit": true,
00:22:20.795 "transport_retry_count": 4,
00:22:20.795 "bdev_retry_count": 3,
00:22:20.795 "transport_ack_timeout": 0,
00:22:20.795 "ctrlr_loss_timeout_sec": 0,
00:22:20.795 "reconnect_delay_sec": 0,
00:22:20.795 "fast_io_fail_timeout_sec": 0,
00:22:20.795 "disable_auto_failback": false,
00:22:20.795 "generate_uuids": false,
00:22:20.795 "transport_tos": 0,
00:22:20.795 "nvme_error_stat": false,
00:22:20.795 "rdma_srq_size": 0,
00:22:20.795 "io_path_stat": false,
00:22:20.795 "allow_accel_sequence": false,
00:22:20.795 "rdma_max_cq_size": 0,
00:22:20.795 "rdma_cm_event_timeout_ms": 0,
00:22:20.795 "dhchap_digests": [
00:22:20.795 "sha256",
00:22:20.795 "sha384",
00:22:20.795 "sha512"
00:22:20.795 ],
00:22:20.796 "dhchap_dhgroups": [
00:22:20.796 "null",
00:22:20.796 "ffdhe2048",
00:22:20.796 "ffdhe3072",
00:22:20.796 "ffdhe4096",
00:22:20.796 "ffdhe6144",
00:22:20.796 "ffdhe8192"
00:22:20.796 ]
00:22:20.796 }
00:22:20.796 },
00:22:20.796 {
00:22:20.796 "method": "bdev_nvme_attach_controller",
00:22:20.796 "params": {
00:22:20.796 "name": "TLSTEST",
00:22:20.796 "trtype": "TCP",
00:22:20.796 "adrfam": "IPv4",
00:22:20.796 "traddr": "10.0.0.2",
00:22:20.796 "trsvcid": "4420",
00:22:20.796 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:20.796 "prchk_reftag": false,
00:22:20.796 "prchk_guard": false,
00:22:20.796 "ctrlr_loss_timeout_sec": 0,
00:22:20.796 "reconnect_delay_sec": 0,
00:22:20.796 "fast_io_fail_timeout_sec": 0,
00:22:20.796 "psk": "key0",
00:22:20.796 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:20.796 "hdgst": false,
00:22:20.796 "ddgst": false,
00:22:20.796 "multipath": "multipath"
00:22:20.796 }
00:22:20.796 },
00:22:20.796 {
00:22:20.796 "method": "bdev_nvme_set_hotplug",
00:22:20.796 "params": {
00:22:20.796 "period_us": 100000,
00:22:20.796 "enable": false
00:22:20.796 }
00:22:20.796 },
00:22:20.796 {
00:22:20.796 "method": "bdev_wait_for_examine"
00:22:20.796 }
00:22:20.796 ]
00:22:20.796 },
00:22:20.796 {
00:22:20.796 "subsystem": "nbd",
00:22:20.796 "config": []
00:22:20.796 }
00:22:20.796 ]
00:22:20.796 }'
00:22:21.057 [2024-11-26 07:32:04.941298] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... [2024-11-26 07:32:04.941352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2135339 ]
00:22:21.057 [2024-11-26 07:32:05.005728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:21.057 [2024-11-26 07:32:05.034681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:22:21.057 [2024-11-26 07:32:05.168748] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:21.629 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:21.629 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:21.629 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:22:21.889 Running I/O for 10 seconds...
00:22:23.772 5167.00 IOPS, 20.18 MiB/s [2024-11-26T06:32:08.852Z] 5383.50 IOPS, 21.03 MiB/s [2024-11-26T06:32:10.234Z] 5719.33 IOPS, 22.34 MiB/s [2024-11-26T06:32:11.178Z] 5613.75 IOPS, 21.93 MiB/s [2024-11-26T06:32:12.119Z] 5484.00 IOPS, 21.42 MiB/s [2024-11-26T06:32:13.060Z] 5575.33 IOPS, 21.78 MiB/s [2024-11-26T06:32:14.001Z] 5698.14 IOPS, 22.26 MiB/s [2024-11-26T06:32:14.945Z] 5767.62 IOPS, 22.53 MiB/s [2024-11-26T06:32:15.888Z] 5848.89 IOPS, 22.85 MiB/s [2024-11-26T06:32:15.888Z] 5740.30 IOPS, 22.42 MiB/s
00:22:31.751 Latency(us)
00:22:31.751 [2024-11-26T06:32:15.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:31.751 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:31.751 Verification LBA range: start 0x0 length 0x2000
00:22:31.751 TLSTESTn1 : 10.02 5741.53 22.43 0.00 0.00 22258.50 5925.55 33641.81
00:22:31.751 [2024-11-26T06:32:15.888Z] ===================================================================================================================
00:22:31.751 [2024-11-26T06:32:15.888Z] Total : 5741.53 22.43 0.00 0.00 22258.50 5925.55 33641.81
00:22:31.751 {
00:22:31.751 "results": [
00:22:31.751 {
00:22:31.751 "job": "TLSTESTn1",
00:22:31.751 "core_mask": "0x4",
00:22:31.751 "workload": "verify",
00:22:31.751 "status": "finished",
00:22:31.751 "verify_range": {
00:22:31.751 "start": 0,
00:22:31.751 "length": 8192
00:22:31.751 },
00:22:31.751 "queue_depth": 128,
00:22:31.751 "io_size": 4096,
00:22:31.751 "runtime": 10.019973,
00:22:31.751 "iops": 5741.532437263055,
00:22:31.751 "mibps": 22.427861083058808,
00:22:31.751 "io_failed": 0,
00:22:31.751 "io_timeout": 0,
00:22:31.751 "avg_latency_us": 22258.500572223187,
00:22:31.751 "min_latency_us": 5925.546666666667,
00:22:31.751 "max_latency_us": 33641.81333333333
00:22:31.751 }
00:22:31.751 ],
00:22:31.751 "core_count": 1
00:22:31.751 }
00:22:32.013 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:32.013 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2135339
00:22:32.013 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2135339 ']'
00:22:32.013 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2135339
00:22:32.013 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:32.013 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:32.013 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2135339
00:22:32.013 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:22:32.013 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:22:32.013 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2135339'
00:22:32.013 killing process with pid 2135339
00:22:32.013 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2135339
00:22:32.013 Received shutdown signal, test time was about 10.000000 seconds
00:22:32.013
00:22:32.013 Latency(us)
00:22:32.013 [2024-11-26T06:32:16.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:32.013 [2024-11-26T06:32:16.150Z] ===================================================================================================================
00:22:32.013 [2024-11-26T06:32:16.151Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:32.014 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2135339
00:22:32.014 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2135222
00:22:32.014 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2135222 ']'
00:22:32.014 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2135222
00:22:32.014 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:32.014 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:32.014 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2135222
00:22:32.014 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:32.014 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:32.014 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2135222'
00:22:32.014 killing process with pid 2135222
00:22:32.014 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2135222
00:22:32.014 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2135222
00:22:32.275 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart
00:22:32.275 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:32.275 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:32.275 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:32.275 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2137621
00:22:32.275 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2137621
00:22:32.275 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:22:32.275 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2137621 ']'
00:22:32.275 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:32.275 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:32.275 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:32.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:32.275 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:32.275 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:32.275 [2024-11-26 07:32:16.302639] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... [2024-11-26 07:32:16.302700] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:32.275 [2024-11-26 07:32:16.386837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:32.536 [2024-11-26 07:32:16.422371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:32.536 [2024-11-26 07:32:16.422405] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:32.536 [2024-11-26 07:32:16.422414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:32.536 [2024-11-26 07:32:16.422421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:32.536 [2024-11-26 07:32:16.422426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:32.537 [2024-11-26 07:32:16.423023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:33.158 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:33.158 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:33.158 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:33.158 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:33.158 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:33.158 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:33.158 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.XdJDwY1ZZ0
00:22:33.158 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.XdJDwY1ZZ0
00:22:33.158 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:22:33.473 [2024-11-26 07:32:17.275278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:33.473 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:22:33.473 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:22:33.754 [2024-11-26 07:32:17.612122] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:22:33.754 [2024-11-26 07:32:17.612344] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:33.754 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:22:33.754 malloc0
00:22:33.754 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:22:34.023 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.XdJDwY1ZZ0
00:22:34.023 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:22:34.283 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2138051
00:22:34.283 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:22:34.283 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
00:22:34.283 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2138051 /var/tmp/bdevperf.sock
00:22:34.283 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2138051 ']'
00:22:34.283 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:34.283 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:34.283 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:34.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:34.284 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:34.284 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:34.284 [2024-11-26 07:32:18.357185] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... [2024-11-26 07:32:18.357239] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138051 ]
00:22:34.545 [2024-11-26 07:32:18.446554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:34.545 [2024-11-26 07:32:18.476524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:35.118 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:35.118 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:35.118 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XdJDwY1ZZ0
00:22:35.379 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:22:35.379 [2024-11-26 07:32:19.452245] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:35.640 nvme0n1
00:22:35.640 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:35.640 Running I/O for 1 seconds...
00:22:36.583 4827.00 IOPS, 18.86 MiB/s
00:22:36.583
00:22:36.583 Latency(us)
00:22:36.583 [2024-11-26T06:32:20.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:36.583 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:36.583 Verification LBA range: start 0x0 length 0x2000
00:22:36.583 nvme0n1 : 1.02 4864.76 19.00 0.00 0.00 26127.72 4505.60 31020.37
00:22:36.583 [2024-11-26T06:32:20.720Z] ===================================================================================================================
00:22:36.583 [2024-11-26T06:32:20.720Z] Total : 4864.76 19.00 0.00 0.00 26127.72 4505.60 31020.37
00:22:36.583 {
00:22:36.583 "results": [
00:22:36.583 {
00:22:36.583 "job": "nvme0n1",
00:22:36.583 "core_mask": "0x2",
00:22:36.583 "workload": "verify",
00:22:36.583 "status": "finished",
00:22:36.583 "verify_range": {
00:22:36.583 "start": 0,
00:22:36.583 "length": 8192
00:22:36.583 },
00:22:36.583 "queue_depth": 128,
00:22:36.583 "io_size": 4096,
00:22:36.583 "runtime": 1.018549,
00:22:36.583 "iops": 4864.763501805019,
00:22:36.583 "mibps": 19.002982428925854,
00:22:36.583 "io_failed": 0,
00:22:36.583 "io_timeout": 0,
00:22:36.583 "avg_latency_us": 26127.715799529095,
00:22:36.583 "min_latency_us": 4505.6,
00:22:36.583 "max_latency_us": 31020.373333333333
00:22:36.583 }
00:22:36.583 ],
00:22:36.583 "core_count": 1
00:22:36.583 }
00:22:36.583 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2138051
00:22:36.583 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2138051 ']'
00:22:36.583 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2138051
00:22:36.583 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:36.583 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:36.583 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2138051
00:22:36.844 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:36.844 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:36.844 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2138051'
00:22:36.844 killing process with pid 2138051
00:22:36.844 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2138051
00:22:36.844 Received shutdown signal, test time was about 1.000000 seconds
00:22:36.844
00:22:36.844 Latency(us)
00:22:36.844 [2024-11-26T06:32:20.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:36.844 [2024-11-26T06:32:20.981Z] ===================================================================================================================
00:22:36.844 [2024-11-26T06:32:20.981Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:36.844 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2138051
00:22:36.844 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2137621
00:22:36.844 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2137621 ']'
00:22:36.844 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2137621
00:22:36.844 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:36.844 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls --
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:36.844 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2137621 00:22:36.844 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:36.844 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:36.844 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2137621' 00:22:36.844 killing process with pid 2137621 00:22:36.844 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2137621 00:22:36.844 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2137621 00:22:37.106 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:22:37.106 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:37.106 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:37.106 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.106 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2138478 00:22:37.106 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2138478 00:22:37.106 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:37.106 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2138478 ']' 00:22:37.106 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.106 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:22:37.106 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.106 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:37.106 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.106 [2024-11-26 07:32:21.096122] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:22:37.106 [2024-11-26 07:32:21.096177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.106 [2024-11-26 07:32:21.182378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.106 [2024-11-26 07:32:21.217429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.106 [2024-11-26 07:32:21.217467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.106 [2024-11-26 07:32:21.217477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.106 [2024-11-26 07:32:21.217485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.106 [2024-11-26 07:32:21.217490] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:37.106 [2024-11-26 07:32:21.218058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.049 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:38.049 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:38.049 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:38.049 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:38.049 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.049 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.049 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:22:38.049 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.049 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.049 [2024-11-26 07:32:21.950457] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.049 malloc0 00:22:38.049 [2024-11-26 07:32:21.977175] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:38.049 [2024-11-26 07:32:21.977395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.049 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.049 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2138758 00:22:38.049 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2138758 /var/tmp/bdevperf.sock 00:22:38.049 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:38.049 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2138758 ']' 00:22:38.049 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.049 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:38.049 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:38.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:38.049 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:38.049 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.049 [2024-11-26 07:32:22.058317] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:22:38.049 [2024-11-26 07:32:22.058365] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138758 ] 00:22:38.049 [2024-11-26 07:32:22.150141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.310 [2024-11-26 07:32:22.180141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.881 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:38.881 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:38.881 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XdJDwY1ZZ0 00:22:38.881 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:39.142 [2024-11-26 07:32:23.155873] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:39.142 nvme0n1 00:22:39.142 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:39.403 Running I/O for 1 seconds... 
00:22:40.344 3933.00 IOPS, 15.36 MiB/s 00:22:40.344 Latency(us) 00:22:40.344 [2024-11-26T06:32:24.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.344 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:40.344 Verification LBA range: start 0x0 length 0x2000 00:22:40.344 nvme0n1 : 1.02 3974.82 15.53 0.00 0.00 31980.96 6608.21 108789.76 00:22:40.344 [2024-11-26T06:32:24.481Z] =================================================================================================================== 00:22:40.344 [2024-11-26T06:32:24.481Z] Total : 3974.82 15.53 0.00 0.00 31980.96 6608.21 108789.76 00:22:40.344 { 00:22:40.344 "results": [ 00:22:40.344 { 00:22:40.344 "job": "nvme0n1", 00:22:40.344 "core_mask": "0x2", 00:22:40.344 "workload": "verify", 00:22:40.344 "status": "finished", 00:22:40.344 "verify_range": { 00:22:40.344 "start": 0, 00:22:40.345 "length": 8192 00:22:40.345 }, 00:22:40.345 "queue_depth": 128, 00:22:40.345 "io_size": 4096, 00:22:40.345 "runtime": 1.021681, 00:22:40.345 "iops": 3974.8218866749994, 00:22:40.345 "mibps": 15.526647994824216, 00:22:40.345 "io_failed": 0, 00:22:40.345 "io_timeout": 0, 00:22:40.345 "avg_latency_us": 31980.964846097017, 00:22:40.345 "min_latency_us": 6608.213333333333, 00:22:40.345 "max_latency_us": 108789.76 00:22:40.345 } 00:22:40.345 ], 00:22:40.345 "core_count": 1 00:22:40.345 } 00:22:40.345 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:22:40.345 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.345 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.605 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.605 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:22:40.605 "subsystems": [ 00:22:40.605 { 00:22:40.605 "subsystem": 
"keyring", 00:22:40.605 "config": [ 00:22:40.605 { 00:22:40.605 "method": "keyring_file_add_key", 00:22:40.605 "params": { 00:22:40.605 "name": "key0", 00:22:40.605 "path": "/tmp/tmp.XdJDwY1ZZ0" 00:22:40.605 } 00:22:40.605 } 00:22:40.605 ] 00:22:40.605 }, 00:22:40.605 { 00:22:40.605 "subsystem": "iobuf", 00:22:40.605 "config": [ 00:22:40.605 { 00:22:40.605 "method": "iobuf_set_options", 00:22:40.605 "params": { 00:22:40.605 "small_pool_count": 8192, 00:22:40.605 "large_pool_count": 1024, 00:22:40.605 "small_bufsize": 8192, 00:22:40.605 "large_bufsize": 135168, 00:22:40.605 "enable_numa": false 00:22:40.605 } 00:22:40.605 } 00:22:40.605 ] 00:22:40.605 }, 00:22:40.605 { 00:22:40.605 "subsystem": "sock", 00:22:40.605 "config": [ 00:22:40.605 { 00:22:40.605 "method": "sock_set_default_impl", 00:22:40.605 "params": { 00:22:40.605 "impl_name": "posix" 00:22:40.605 } 00:22:40.605 }, 00:22:40.605 { 00:22:40.605 "method": "sock_impl_set_options", 00:22:40.605 "params": { 00:22:40.605 "impl_name": "ssl", 00:22:40.605 "recv_buf_size": 4096, 00:22:40.605 "send_buf_size": 4096, 00:22:40.605 "enable_recv_pipe": true, 00:22:40.605 "enable_quickack": false, 00:22:40.605 "enable_placement_id": 0, 00:22:40.605 "enable_zerocopy_send_server": true, 00:22:40.605 "enable_zerocopy_send_client": false, 00:22:40.605 "zerocopy_threshold": 0, 00:22:40.605 "tls_version": 0, 00:22:40.605 "enable_ktls": false 00:22:40.605 } 00:22:40.605 }, 00:22:40.605 { 00:22:40.605 "method": "sock_impl_set_options", 00:22:40.605 "params": { 00:22:40.605 "impl_name": "posix", 00:22:40.605 "recv_buf_size": 2097152, 00:22:40.605 "send_buf_size": 2097152, 00:22:40.605 "enable_recv_pipe": true, 00:22:40.605 "enable_quickack": false, 00:22:40.605 "enable_placement_id": 0, 00:22:40.605 "enable_zerocopy_send_server": true, 00:22:40.605 "enable_zerocopy_send_client": false, 00:22:40.605 "zerocopy_threshold": 0, 00:22:40.605 "tls_version": 0, 00:22:40.605 "enable_ktls": false 00:22:40.605 } 00:22:40.605 } 00:22:40.605 
] 00:22:40.605 }, 00:22:40.605 { 00:22:40.605 "subsystem": "vmd", 00:22:40.605 "config": [] 00:22:40.605 }, 00:22:40.605 { 00:22:40.605 "subsystem": "accel", 00:22:40.605 "config": [ 00:22:40.605 { 00:22:40.605 "method": "accel_set_options", 00:22:40.605 "params": { 00:22:40.605 "small_cache_size": 128, 00:22:40.605 "large_cache_size": 16, 00:22:40.605 "task_count": 2048, 00:22:40.605 "sequence_count": 2048, 00:22:40.605 "buf_count": 2048 00:22:40.605 } 00:22:40.605 } 00:22:40.605 ] 00:22:40.605 }, 00:22:40.605 { 00:22:40.605 "subsystem": "bdev", 00:22:40.605 "config": [ 00:22:40.605 { 00:22:40.605 "method": "bdev_set_options", 00:22:40.605 "params": { 00:22:40.605 "bdev_io_pool_size": 65535, 00:22:40.605 "bdev_io_cache_size": 256, 00:22:40.605 "bdev_auto_examine": true, 00:22:40.605 "iobuf_small_cache_size": 128, 00:22:40.605 "iobuf_large_cache_size": 16 00:22:40.605 } 00:22:40.605 }, 00:22:40.605 { 00:22:40.605 "method": "bdev_raid_set_options", 00:22:40.605 "params": { 00:22:40.605 "process_window_size_kb": 1024, 00:22:40.605 "process_max_bandwidth_mb_sec": 0 00:22:40.605 } 00:22:40.605 }, 00:22:40.605 { 00:22:40.605 "method": "bdev_iscsi_set_options", 00:22:40.605 "params": { 00:22:40.605 "timeout_sec": 30 00:22:40.605 } 00:22:40.605 }, 00:22:40.605 { 00:22:40.605 "method": "bdev_nvme_set_options", 00:22:40.605 "params": { 00:22:40.605 "action_on_timeout": "none", 00:22:40.605 "timeout_us": 0, 00:22:40.605 "timeout_admin_us": 0, 00:22:40.605 "keep_alive_timeout_ms": 10000, 00:22:40.605 "arbitration_burst": 0, 00:22:40.605 "low_priority_weight": 0, 00:22:40.605 "medium_priority_weight": 0, 00:22:40.605 "high_priority_weight": 0, 00:22:40.605 "nvme_adminq_poll_period_us": 10000, 00:22:40.605 "nvme_ioq_poll_period_us": 0, 00:22:40.605 "io_queue_requests": 0, 00:22:40.605 "delay_cmd_submit": true, 00:22:40.605 "transport_retry_count": 4, 00:22:40.605 "bdev_retry_count": 3, 00:22:40.605 "transport_ack_timeout": 0, 00:22:40.605 "ctrlr_loss_timeout_sec": 0, 
00:22:40.605 "reconnect_delay_sec": 0, 00:22:40.605 "fast_io_fail_timeout_sec": 0, 00:22:40.605 "disable_auto_failback": false, 00:22:40.605 "generate_uuids": false, 00:22:40.605 "transport_tos": 0, 00:22:40.605 "nvme_error_stat": false, 00:22:40.605 "rdma_srq_size": 0, 00:22:40.605 "io_path_stat": false, 00:22:40.605 "allow_accel_sequence": false, 00:22:40.605 "rdma_max_cq_size": 0, 00:22:40.605 "rdma_cm_event_timeout_ms": 0, 00:22:40.605 "dhchap_digests": [ 00:22:40.605 "sha256", 00:22:40.605 "sha384", 00:22:40.605 "sha512" 00:22:40.605 ], 00:22:40.605 "dhchap_dhgroups": [ 00:22:40.605 "null", 00:22:40.605 "ffdhe2048", 00:22:40.605 "ffdhe3072", 00:22:40.605 "ffdhe4096", 00:22:40.605 "ffdhe6144", 00:22:40.605 "ffdhe8192" 00:22:40.605 ] 00:22:40.605 } 00:22:40.605 }, 00:22:40.605 { 00:22:40.605 "method": "bdev_nvme_set_hotplug", 00:22:40.605 "params": { 00:22:40.605 "period_us": 100000, 00:22:40.605 "enable": false 00:22:40.605 } 00:22:40.605 }, 00:22:40.605 { 00:22:40.605 "method": "bdev_malloc_create", 00:22:40.605 "params": { 00:22:40.605 "name": "malloc0", 00:22:40.605 "num_blocks": 8192, 00:22:40.605 "block_size": 4096, 00:22:40.605 "physical_block_size": 4096, 00:22:40.605 "uuid": "5413d7f4-28c1-4f96-b49f-fcaf8166d451", 00:22:40.605 "optimal_io_boundary": 0, 00:22:40.605 "md_size": 0, 00:22:40.605 "dif_type": 0, 00:22:40.605 "dif_is_head_of_md": false, 00:22:40.605 "dif_pi_format": 0 00:22:40.605 } 00:22:40.605 }, 00:22:40.605 { 00:22:40.605 "method": "bdev_wait_for_examine" 00:22:40.605 } 00:22:40.605 ] 00:22:40.606 }, 00:22:40.606 { 00:22:40.606 "subsystem": "nbd", 00:22:40.606 "config": [] 00:22:40.606 }, 00:22:40.606 { 00:22:40.606 "subsystem": "scheduler", 00:22:40.606 "config": [ 00:22:40.606 { 00:22:40.606 "method": "framework_set_scheduler", 00:22:40.606 "params": { 00:22:40.606 "name": "static" 00:22:40.606 } 00:22:40.606 } 00:22:40.606 ] 00:22:40.606 }, 00:22:40.606 { 00:22:40.606 "subsystem": "nvmf", 00:22:40.606 "config": [ 00:22:40.606 { 
00:22:40.606 "method": "nvmf_set_config", 00:22:40.606 "params": { 00:22:40.606 "discovery_filter": "match_any", 00:22:40.606 "admin_cmd_passthru": { 00:22:40.606 "identify_ctrlr": false 00:22:40.606 }, 00:22:40.606 "dhchap_digests": [ 00:22:40.606 "sha256", 00:22:40.606 "sha384", 00:22:40.606 "sha512" 00:22:40.606 ], 00:22:40.606 "dhchap_dhgroups": [ 00:22:40.606 "null", 00:22:40.606 "ffdhe2048", 00:22:40.606 "ffdhe3072", 00:22:40.606 "ffdhe4096", 00:22:40.606 "ffdhe6144", 00:22:40.606 "ffdhe8192" 00:22:40.606 ] 00:22:40.606 } 00:22:40.606 }, 00:22:40.606 { 00:22:40.606 "method": "nvmf_set_max_subsystems", 00:22:40.606 "params": { 00:22:40.606 "max_subsystems": 1024 00:22:40.606 } 00:22:40.606 }, 00:22:40.606 { 00:22:40.606 "method": "nvmf_set_crdt", 00:22:40.606 "params": { 00:22:40.606 "crdt1": 0, 00:22:40.606 "crdt2": 0, 00:22:40.606 "crdt3": 0 00:22:40.606 } 00:22:40.606 }, 00:22:40.606 { 00:22:40.606 "method": "nvmf_create_transport", 00:22:40.606 "params": { 00:22:40.606 "trtype": "TCP", 00:22:40.606 "max_queue_depth": 128, 00:22:40.606 "max_io_qpairs_per_ctrlr": 127, 00:22:40.606 "in_capsule_data_size": 4096, 00:22:40.606 "max_io_size": 131072, 00:22:40.606 "io_unit_size": 131072, 00:22:40.606 "max_aq_depth": 128, 00:22:40.606 "num_shared_buffers": 511, 00:22:40.606 "buf_cache_size": 4294967295, 00:22:40.606 "dif_insert_or_strip": false, 00:22:40.606 "zcopy": false, 00:22:40.606 "c2h_success": false, 00:22:40.606 "sock_priority": 0, 00:22:40.606 "abort_timeout_sec": 1, 00:22:40.606 "ack_timeout": 0, 00:22:40.606 "data_wr_pool_size": 0 00:22:40.606 } 00:22:40.606 }, 00:22:40.606 { 00:22:40.606 "method": "nvmf_create_subsystem", 00:22:40.606 "params": { 00:22:40.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.606 "allow_any_host": false, 00:22:40.606 "serial_number": "00000000000000000000", 00:22:40.606 "model_number": "SPDK bdev Controller", 00:22:40.606 "max_namespaces": 32, 00:22:40.606 "min_cntlid": 1, 00:22:40.606 "max_cntlid": 65519, 00:22:40.606 
"ana_reporting": false 00:22:40.606 } 00:22:40.606 }, 00:22:40.606 { 00:22:40.606 "method": "nvmf_subsystem_add_host", 00:22:40.606 "params": { 00:22:40.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.606 "host": "nqn.2016-06.io.spdk:host1", 00:22:40.606 "psk": "key0" 00:22:40.606 } 00:22:40.606 }, 00:22:40.606 { 00:22:40.606 "method": "nvmf_subsystem_add_ns", 00:22:40.606 "params": { 00:22:40.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.606 "namespace": { 00:22:40.606 "nsid": 1, 00:22:40.606 "bdev_name": "malloc0", 00:22:40.606 "nguid": "5413D7F428C14F96B49FFCAF8166D451", 00:22:40.606 "uuid": "5413d7f4-28c1-4f96-b49f-fcaf8166d451", 00:22:40.606 "no_auto_visible": false 00:22:40.606 } 00:22:40.606 } 00:22:40.606 }, 00:22:40.606 { 00:22:40.606 "method": "nvmf_subsystem_add_listener", 00:22:40.606 "params": { 00:22:40.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.606 "listen_address": { 00:22:40.606 "trtype": "TCP", 00:22:40.606 "adrfam": "IPv4", 00:22:40.606 "traddr": "10.0.0.2", 00:22:40.606 "trsvcid": "4420" 00:22:40.606 }, 00:22:40.606 "secure_channel": false, 00:22:40.606 "sock_impl": "ssl" 00:22:40.606 } 00:22:40.606 } 00:22:40.606 ] 00:22:40.606 } 00:22:40.606 ] 00:22:40.606 }' 00:22:40.606 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:40.867 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:22:40.867 "subsystems": [ 00:22:40.867 { 00:22:40.867 "subsystem": "keyring", 00:22:40.867 "config": [ 00:22:40.867 { 00:22:40.867 "method": "keyring_file_add_key", 00:22:40.867 "params": { 00:22:40.867 "name": "key0", 00:22:40.867 "path": "/tmp/tmp.XdJDwY1ZZ0" 00:22:40.867 } 00:22:40.867 } 00:22:40.867 ] 00:22:40.867 }, 00:22:40.867 { 00:22:40.867 "subsystem": "iobuf", 00:22:40.867 "config": [ 00:22:40.867 { 00:22:40.867 "method": "iobuf_set_options", 00:22:40.867 "params": { 00:22:40.867 
"small_pool_count": 8192, 00:22:40.867 "large_pool_count": 1024, 00:22:40.867 "small_bufsize": 8192, 00:22:40.867 "large_bufsize": 135168, 00:22:40.867 "enable_numa": false 00:22:40.867 } 00:22:40.867 } 00:22:40.867 ] 00:22:40.867 }, 00:22:40.867 { 00:22:40.867 "subsystem": "sock", 00:22:40.867 "config": [ 00:22:40.867 { 00:22:40.867 "method": "sock_set_default_impl", 00:22:40.867 "params": { 00:22:40.867 "impl_name": "posix" 00:22:40.867 } 00:22:40.867 }, 00:22:40.867 { 00:22:40.867 "method": "sock_impl_set_options", 00:22:40.867 "params": { 00:22:40.867 "impl_name": "ssl", 00:22:40.867 "recv_buf_size": 4096, 00:22:40.867 "send_buf_size": 4096, 00:22:40.867 "enable_recv_pipe": true, 00:22:40.867 "enable_quickack": false, 00:22:40.867 "enable_placement_id": 0, 00:22:40.867 "enable_zerocopy_send_server": true, 00:22:40.867 "enable_zerocopy_send_client": false, 00:22:40.867 "zerocopy_threshold": 0, 00:22:40.867 "tls_version": 0, 00:22:40.867 "enable_ktls": false 00:22:40.867 } 00:22:40.867 }, 00:22:40.867 { 00:22:40.867 "method": "sock_impl_set_options", 00:22:40.867 "params": { 00:22:40.867 "impl_name": "posix", 00:22:40.867 "recv_buf_size": 2097152, 00:22:40.867 "send_buf_size": 2097152, 00:22:40.867 "enable_recv_pipe": true, 00:22:40.867 "enable_quickack": false, 00:22:40.867 "enable_placement_id": 0, 00:22:40.867 "enable_zerocopy_send_server": true, 00:22:40.867 "enable_zerocopy_send_client": false, 00:22:40.867 "zerocopy_threshold": 0, 00:22:40.867 "tls_version": 0, 00:22:40.867 "enable_ktls": false 00:22:40.867 } 00:22:40.867 } 00:22:40.867 ] 00:22:40.867 }, 00:22:40.867 { 00:22:40.867 "subsystem": "vmd", 00:22:40.867 "config": [] 00:22:40.867 }, 00:22:40.867 { 00:22:40.867 "subsystem": "accel", 00:22:40.867 "config": [ 00:22:40.867 { 00:22:40.867 "method": "accel_set_options", 00:22:40.867 "params": { 00:22:40.867 "small_cache_size": 128, 00:22:40.867 "large_cache_size": 16, 00:22:40.867 "task_count": 2048, 00:22:40.867 "sequence_count": 2048, 00:22:40.867 
"buf_count": 2048 00:22:40.867 } 00:22:40.867 } 00:22:40.867 ] 00:22:40.867 }, 00:22:40.867 { 00:22:40.867 "subsystem": "bdev", 00:22:40.867 "config": [ 00:22:40.867 { 00:22:40.867 "method": "bdev_set_options", 00:22:40.867 "params": { 00:22:40.867 "bdev_io_pool_size": 65535, 00:22:40.867 "bdev_io_cache_size": 256, 00:22:40.867 "bdev_auto_examine": true, 00:22:40.867 "iobuf_small_cache_size": 128, 00:22:40.867 "iobuf_large_cache_size": 16 00:22:40.867 } 00:22:40.867 }, 00:22:40.867 { 00:22:40.867 "method": "bdev_raid_set_options", 00:22:40.867 "params": { 00:22:40.867 "process_window_size_kb": 1024, 00:22:40.867 "process_max_bandwidth_mb_sec": 0 00:22:40.867 } 00:22:40.867 }, 00:22:40.867 { 00:22:40.867 "method": "bdev_iscsi_set_options", 00:22:40.867 "params": { 00:22:40.867 "timeout_sec": 30 00:22:40.867 } 00:22:40.867 }, 00:22:40.867 { 00:22:40.867 "method": "bdev_nvme_set_options", 00:22:40.867 "params": { 00:22:40.867 "action_on_timeout": "none", 00:22:40.867 "timeout_us": 0, 00:22:40.867 "timeout_admin_us": 0, 00:22:40.867 "keep_alive_timeout_ms": 10000, 00:22:40.867 "arbitration_burst": 0, 00:22:40.867 "low_priority_weight": 0, 00:22:40.867 "medium_priority_weight": 0, 00:22:40.867 "high_priority_weight": 0, 00:22:40.867 "nvme_adminq_poll_period_us": 10000, 00:22:40.867 "nvme_ioq_poll_period_us": 0, 00:22:40.867 "io_queue_requests": 512, 00:22:40.867 "delay_cmd_submit": true, 00:22:40.867 "transport_retry_count": 4, 00:22:40.867 "bdev_retry_count": 3, 00:22:40.867 "transport_ack_timeout": 0, 00:22:40.867 "ctrlr_loss_timeout_sec": 0, 00:22:40.867 "reconnect_delay_sec": 0, 00:22:40.867 "fast_io_fail_timeout_sec": 0, 00:22:40.867 "disable_auto_failback": false, 00:22:40.867 "generate_uuids": false, 00:22:40.867 "transport_tos": 0, 00:22:40.867 "nvme_error_stat": false, 00:22:40.867 "rdma_srq_size": 0, 00:22:40.867 "io_path_stat": false, 00:22:40.867 "allow_accel_sequence": false, 00:22:40.867 "rdma_max_cq_size": 0, 00:22:40.867 "rdma_cm_event_timeout_ms": 0, 
00:22:40.867 "dhchap_digests": [ 00:22:40.867 "sha256", 00:22:40.867 "sha384", 00:22:40.867 "sha512" 00:22:40.867 ], 00:22:40.867 "dhchap_dhgroups": [ 00:22:40.867 "null", 00:22:40.867 "ffdhe2048", 00:22:40.867 "ffdhe3072", 00:22:40.867 "ffdhe4096", 00:22:40.867 "ffdhe6144", 00:22:40.867 "ffdhe8192" 00:22:40.867 ] 00:22:40.867 } 00:22:40.867 }, 00:22:40.867 { 00:22:40.867 "method": "bdev_nvme_attach_controller", 00:22:40.868 "params": { 00:22:40.868 "name": "nvme0", 00:22:40.868 "trtype": "TCP", 00:22:40.868 "adrfam": "IPv4", 00:22:40.868 "traddr": "10.0.0.2", 00:22:40.868 "trsvcid": "4420", 00:22:40.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.868 "prchk_reftag": false, 00:22:40.868 "prchk_guard": false, 00:22:40.868 "ctrlr_loss_timeout_sec": 0, 00:22:40.868 "reconnect_delay_sec": 0, 00:22:40.868 "fast_io_fail_timeout_sec": 0, 00:22:40.868 "psk": "key0", 00:22:40.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:40.868 "hdgst": false, 00:22:40.868 "ddgst": false, 00:22:40.868 "multipath": "multipath" 00:22:40.868 } 00:22:40.868 }, 00:22:40.868 { 00:22:40.868 "method": "bdev_nvme_set_hotplug", 00:22:40.868 "params": { 00:22:40.868 "period_us": 100000, 00:22:40.868 "enable": false 00:22:40.868 } 00:22:40.868 }, 00:22:40.868 { 00:22:40.868 "method": "bdev_enable_histogram", 00:22:40.868 "params": { 00:22:40.868 "name": "nvme0n1", 00:22:40.868 "enable": true 00:22:40.868 } 00:22:40.868 }, 00:22:40.868 { 00:22:40.868 "method": "bdev_wait_for_examine" 00:22:40.868 } 00:22:40.868 ] 00:22:40.868 }, 00:22:40.868 { 00:22:40.868 "subsystem": "nbd", 00:22:40.868 "config": [] 00:22:40.868 } 00:22:40.868 ] 00:22:40.868 }' 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2138758 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2138758 ']' 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2138758 00:22:40.868 07:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2138758 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2138758' 00:22:40.868 killing process with pid 2138758 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2138758 00:22:40.868 Received shutdown signal, test time was about 1.000000 seconds 00:22:40.868 00:22:40.868 Latency(us) 00:22:40.868 [2024-11-26T06:32:25.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.868 [2024-11-26T06:32:25.005Z] =================================================================================================================== 00:22:40.868 [2024-11-26T06:32:25.005Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2138758 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2138478 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2138478 ']' 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2138478 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.868 
07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2138478 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2138478' 00:22:40.868 killing process with pid 2138478 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2138478 00:22:40.868 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2138478 00:22:41.129 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:22:41.129 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:41.129 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:41.129 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:22:41.129 "subsystems": [ 00:22:41.129 { 00:22:41.129 "subsystem": "keyring", 00:22:41.129 "config": [ 00:22:41.129 { 00:22:41.129 "method": "keyring_file_add_key", 00:22:41.129 "params": { 00:22:41.129 "name": "key0", 00:22:41.129 "path": "/tmp/tmp.XdJDwY1ZZ0" 00:22:41.129 } 00:22:41.129 } 00:22:41.129 ] 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "subsystem": "iobuf", 00:22:41.129 "config": [ 00:22:41.129 { 00:22:41.129 "method": "iobuf_set_options", 00:22:41.129 "params": { 00:22:41.129 "small_pool_count": 8192, 00:22:41.129 "large_pool_count": 1024, 00:22:41.129 "small_bufsize": 8192, 00:22:41.129 "large_bufsize": 135168, 00:22:41.129 "enable_numa": false 00:22:41.129 } 00:22:41.129 } 00:22:41.129 ] 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "subsystem": "sock", 00:22:41.129 "config": [ 
00:22:41.129 { 00:22:41.129 "method": "sock_set_default_impl", 00:22:41.129 "params": { 00:22:41.129 "impl_name": "posix" 00:22:41.129 } 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "method": "sock_impl_set_options", 00:22:41.129 "params": { 00:22:41.129 "impl_name": "ssl", 00:22:41.129 "recv_buf_size": 4096, 00:22:41.129 "send_buf_size": 4096, 00:22:41.129 "enable_recv_pipe": true, 00:22:41.129 "enable_quickack": false, 00:22:41.129 "enable_placement_id": 0, 00:22:41.129 "enable_zerocopy_send_server": true, 00:22:41.129 "enable_zerocopy_send_client": false, 00:22:41.129 "zerocopy_threshold": 0, 00:22:41.129 "tls_version": 0, 00:22:41.129 "enable_ktls": false 00:22:41.129 } 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "method": "sock_impl_set_options", 00:22:41.129 "params": { 00:22:41.129 "impl_name": "posix", 00:22:41.129 "recv_buf_size": 2097152, 00:22:41.129 "send_buf_size": 2097152, 00:22:41.129 "enable_recv_pipe": true, 00:22:41.129 "enable_quickack": false, 00:22:41.129 "enable_placement_id": 0, 00:22:41.129 "enable_zerocopy_send_server": true, 00:22:41.129 "enable_zerocopy_send_client": false, 00:22:41.129 "zerocopy_threshold": 0, 00:22:41.129 "tls_version": 0, 00:22:41.129 "enable_ktls": false 00:22:41.129 } 00:22:41.129 } 00:22:41.129 ] 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "subsystem": "vmd", 00:22:41.129 "config": [] 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "subsystem": "accel", 00:22:41.129 "config": [ 00:22:41.129 { 00:22:41.129 "method": "accel_set_options", 00:22:41.129 "params": { 00:22:41.129 "small_cache_size": 128, 00:22:41.129 "large_cache_size": 16, 00:22:41.129 "task_count": 2048, 00:22:41.129 "sequence_count": 2048, 00:22:41.129 "buf_count": 2048 00:22:41.129 } 00:22:41.129 } 00:22:41.129 ] 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "subsystem": "bdev", 00:22:41.129 "config": [ 00:22:41.129 { 00:22:41.129 "method": "bdev_set_options", 00:22:41.129 "params": { 00:22:41.129 "bdev_io_pool_size": 65535, 00:22:41.129 "bdev_io_cache_size": 
256, 00:22:41.129 "bdev_auto_examine": true, 00:22:41.129 "iobuf_small_cache_size": 128, 00:22:41.129 "iobuf_large_cache_size": 16 00:22:41.129 } 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "method": "bdev_raid_set_options", 00:22:41.129 "params": { 00:22:41.129 "process_window_size_kb": 1024, 00:22:41.129 "process_max_bandwidth_mb_sec": 0 00:22:41.129 } 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "method": "bdev_iscsi_set_options", 00:22:41.129 "params": { 00:22:41.129 "timeout_sec": 30 00:22:41.129 } 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "method": "bdev_nvme_set_options", 00:22:41.129 "params": { 00:22:41.129 "action_on_timeout": "none", 00:22:41.129 "timeout_us": 0, 00:22:41.129 "timeout_admin_us": 0, 00:22:41.129 "keep_alive_timeout_ms": 10000, 00:22:41.129 "arbitration_burst": 0, 00:22:41.129 "low_priority_weight": 0, 00:22:41.129 "medium_priority_weight": 0, 00:22:41.129 "high_priority_weight": 0, 00:22:41.129 "nvme_adminq_poll_period_us": 10000, 00:22:41.129 "nvme_ioq_poll_period_us": 0, 00:22:41.129 "io_queue_requests": 0, 00:22:41.129 "delay_cmd_submit": true, 00:22:41.129 "transport_retry_count": 4, 00:22:41.129 "bdev_retry_count": 3, 00:22:41.129 "transport_ack_timeout": 0, 00:22:41.129 "ctrlr_loss_timeout_sec": 0, 00:22:41.129 "reconnect_delay_sec": 0, 00:22:41.129 "fast_io_fail_timeout_sec": 0, 00:22:41.129 "disable_auto_failback": false, 00:22:41.129 "generate_uuids": false, 00:22:41.129 "transport_tos": 0, 00:22:41.129 "nvme_error_stat": false, 00:22:41.129 "rdma_srq_size": 0, 00:22:41.129 "io_path_stat": false, 00:22:41.129 "allow_accel_sequence": false, 00:22:41.129 "rdma_max_cq_size": 0, 00:22:41.129 "rdma_cm_event_timeout_ms": 0, 00:22:41.129 "dhchap_digests": [ 00:22:41.129 "sha256", 00:22:41.129 "sha384", 00:22:41.129 "sha512" 00:22:41.129 ], 00:22:41.129 "dhchap_dhgroups": [ 00:22:41.129 "null", 00:22:41.129 "ffdhe2048", 00:22:41.129 "ffdhe3072", 00:22:41.129 "ffdhe4096", 00:22:41.129 "ffdhe6144", 00:22:41.129 "ffdhe8192" 00:22:41.129 ] 
00:22:41.129 } 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "method": "bdev_nvme_set_hotplug", 00:22:41.129 "params": { 00:22:41.129 "period_us": 100000, 00:22:41.129 "enable": false 00:22:41.129 } 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "method": "bdev_malloc_create", 00:22:41.129 "params": { 00:22:41.129 "name": "malloc0", 00:22:41.129 "num_blocks": 8192, 00:22:41.129 "block_size": 4096, 00:22:41.129 "physical_block_size": 4096, 00:22:41.129 "uuid": "5413d7f4-28c1-4f96-b49f-fcaf8166d451", 00:22:41.129 "optimal_io_boundary": 0, 00:22:41.129 "md_size": 0, 00:22:41.129 "dif_type": 0, 00:22:41.129 "dif_is_head_of_md": false, 00:22:41.129 "dif_pi_format": 0 00:22:41.129 } 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "method": "bdev_wait_for_examine" 00:22:41.129 } 00:22:41.129 ] 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "subsystem": "nbd", 00:22:41.129 "config": [] 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "subsystem": "scheduler", 00:22:41.129 "config": [ 00:22:41.129 { 00:22:41.129 "method": "framework_set_scheduler", 00:22:41.129 "params": { 00:22:41.129 "name": "static" 00:22:41.129 } 00:22:41.129 } 00:22:41.129 ] 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "subsystem": "nvmf", 00:22:41.129 "config": [ 00:22:41.129 { 00:22:41.129 "method": "nvmf_set_config", 00:22:41.129 "params": { 00:22:41.129 "discovery_filter": "match_any", 00:22:41.129 "admin_cmd_passthru": { 00:22:41.129 "identify_ctrlr": false 00:22:41.129 }, 00:22:41.129 "dhchap_digests": [ 00:22:41.129 "sha256", 00:22:41.129 "sha384", 00:22:41.129 "sha512" 00:22:41.129 ], 00:22:41.129 "dhchap_dhgroups": [ 00:22:41.129 "null", 00:22:41.129 "ffdhe2048", 00:22:41.129 "ffdhe3072", 00:22:41.129 "ffdhe4096", 00:22:41.129 "ffdhe6144", 00:22:41.129 "ffdhe8192" 00:22:41.129 ] 00:22:41.129 } 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "method": "nvmf_set_max_subsystems", 00:22:41.129 "params": { 00:22:41.129 "max_subsystems": 1024 00:22:41.129 } 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "method": 
"nvmf_set_crdt", 00:22:41.129 "params": { 00:22:41.129 "crdt1": 0, 00:22:41.129 "crdt2": 0, 00:22:41.129 "crdt3": 0 00:22:41.129 } 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "method": "nvmf_create_transport", 00:22:41.129 "params": { 00:22:41.129 "trtype": "TCP", 00:22:41.129 "max_queue_depth": 128, 00:22:41.129 "max_io_qpairs_per_ctrlr": 127, 00:22:41.129 "in_capsule_data_size": 4096, 00:22:41.129 "max_io_size": 131072, 00:22:41.129 "io_unit_size": 131072, 00:22:41.129 "max_aq_depth": 128, 00:22:41.129 "num_shared_buffers": 511, 00:22:41.129 "buf_cache_size": 4294967295, 00:22:41.129 "dif_insert_or_strip": false, 00:22:41.129 "zcopy": false, 00:22:41.129 "c2h_success": false, 00:22:41.129 "sock_priority": 0, 00:22:41.129 "abort_timeout_sec": 1, 00:22:41.129 "ack_timeout": 0, 00:22:41.129 "data_wr_pool_size": 0 00:22:41.129 } 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "method": "nvmf_create_subsystem", 00:22:41.129 "params": { 00:22:41.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:41.129 "allow_any_host": false, 00:22:41.129 "serial_number": "00000000000000000000", 00:22:41.129 "model_number": "SPDK bdev Controller", 00:22:41.129 "max_namespaces": 32, 00:22:41.129 "min_cntlid": 1, 00:22:41.129 "max_cntlid": 65519, 00:22:41.129 "ana_reporting": false 00:22:41.129 } 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "method": "nvmf_subsystem_add_host", 00:22:41.129 "params": { 00:22:41.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:41.129 "host": "nqn.2016-06.io.spdk:host1", 00:22:41.129 "psk": "key0" 00:22:41.129 } 00:22:41.129 }, 00:22:41.129 { 00:22:41.129 "method": "nvmf_subsystem_add_ns", 00:22:41.129 "params": { 00:22:41.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:41.129 "namespace": { 00:22:41.129 "nsid": 1, 00:22:41.129 "bdev_name": "malloc0", 00:22:41.129 "nguid": "5413D7F428C14F96B49FFCAF8166D451", 00:22:41.129 "uuid": "5413d7f4-28c1-4f96-b49f-fcaf8166d451", 00:22:41.129 "no_auto_visible": false 00:22:41.129 } 00:22:41.129 } 00:22:41.129 }, 00:22:41.129 { 
00:22:41.129 "method": "nvmf_subsystem_add_listener", 00:22:41.129 "params": { 00:22:41.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:41.129 "listen_address": { 00:22:41.129 "trtype": "TCP", 00:22:41.129 "adrfam": "IPv4", 00:22:41.129 "traddr": "10.0.0.2", 00:22:41.129 "trsvcid": "4420" 00:22:41.129 }, 00:22:41.129 "secure_channel": false, 00:22:41.129 "sock_impl": "ssl" 00:22:41.129 } 00:22:41.129 } 00:22:41.130 ] 00:22:41.130 } 00:22:41.130 ] 00:22:41.130 }' 00:22:41.130 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.130 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2139446 00:22:41.130 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2139446 00:22:41.130 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:41.130 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2139446 ']' 00:22:41.130 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.130 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.130 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.130 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.130 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.130 [2024-11-26 07:32:25.187196] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:22:41.130 [2024-11-26 07:32:25.187262] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.390 [2024-11-26 07:32:25.274410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.390 [2024-11-26 07:32:25.308223] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.390 [2024-11-26 07:32:25.308257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.390 [2024-11-26 07:32:25.308265] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.390 [2024-11-26 07:32:25.308272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.390 [2024-11-26 07:32:25.308279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:41.390 [2024-11-26 07:32:25.308889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.390 [2024-11-26 07:32:25.507443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.649 [2024-11-26 07:32:25.539456] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:41.649 [2024-11-26 07:32:25.539672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.911 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.911 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:41.911 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:41.911 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:41.911 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.911 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.911 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2139475 00:22:41.911 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2139475 /var/tmp/bdevperf.sock 00:22:41.911 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2139475 ']' 00:22:41.911 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.911 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.911 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:41.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.911 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:41.911 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.911 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.911 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:22:41.911 "subsystems": [ 00:22:41.911 { 00:22:41.911 "subsystem": "keyring", 00:22:41.911 "config": [ 00:22:41.911 { 00:22:41.911 "method": "keyring_file_add_key", 00:22:41.911 "params": { 00:22:41.911 "name": "key0", 00:22:41.911 "path": "/tmp/tmp.XdJDwY1ZZ0" 00:22:41.911 } 00:22:41.911 } 00:22:41.911 ] 00:22:41.911 }, 00:22:41.911 { 00:22:41.911 "subsystem": "iobuf", 00:22:41.911 "config": [ 00:22:41.911 { 00:22:41.911 "method": "iobuf_set_options", 00:22:41.911 "params": { 00:22:41.911 "small_pool_count": 8192, 00:22:41.911 "large_pool_count": 1024, 00:22:41.911 "small_bufsize": 8192, 00:22:41.911 "large_bufsize": 135168, 00:22:41.911 "enable_numa": false 00:22:41.911 } 00:22:41.911 } 00:22:41.911 ] 00:22:41.911 }, 00:22:41.911 { 00:22:41.911 "subsystem": "sock", 00:22:41.911 "config": [ 00:22:41.911 { 00:22:41.911 "method": "sock_set_default_impl", 00:22:41.911 "params": { 00:22:41.911 "impl_name": "posix" 00:22:41.911 } 00:22:41.911 }, 00:22:41.911 { 00:22:41.911 "method": "sock_impl_set_options", 00:22:41.911 "params": { 00:22:41.911 "impl_name": "ssl", 00:22:41.911 "recv_buf_size": 4096, 00:22:41.911 "send_buf_size": 4096, 00:22:41.911 "enable_recv_pipe": true, 00:22:41.911 "enable_quickack": false, 00:22:41.911 "enable_placement_id": 0, 00:22:41.911 "enable_zerocopy_send_server": true, 00:22:41.911 
"enable_zerocopy_send_client": false, 00:22:41.911 "zerocopy_threshold": 0, 00:22:41.911 "tls_version": 0, 00:22:41.911 "enable_ktls": false 00:22:41.911 } 00:22:41.911 }, 00:22:41.911 { 00:22:41.911 "method": "sock_impl_set_options", 00:22:41.911 "params": { 00:22:41.911 "impl_name": "posix", 00:22:41.911 "recv_buf_size": 2097152, 00:22:41.911 "send_buf_size": 2097152, 00:22:41.911 "enable_recv_pipe": true, 00:22:41.911 "enable_quickack": false, 00:22:41.911 "enable_placement_id": 0, 00:22:41.911 "enable_zerocopy_send_server": true, 00:22:41.911 "enable_zerocopy_send_client": false, 00:22:41.911 "zerocopy_threshold": 0, 00:22:41.911 "tls_version": 0, 00:22:41.911 "enable_ktls": false 00:22:41.911 } 00:22:41.911 } 00:22:41.911 ] 00:22:41.911 }, 00:22:41.911 { 00:22:41.911 "subsystem": "vmd", 00:22:41.911 "config": [] 00:22:41.911 }, 00:22:41.911 { 00:22:41.911 "subsystem": "accel", 00:22:41.911 "config": [ 00:22:41.911 { 00:22:41.911 "method": "accel_set_options", 00:22:41.911 "params": { 00:22:41.911 "small_cache_size": 128, 00:22:41.911 "large_cache_size": 16, 00:22:41.911 "task_count": 2048, 00:22:41.911 "sequence_count": 2048, 00:22:41.911 "buf_count": 2048 00:22:41.911 } 00:22:41.911 } 00:22:41.911 ] 00:22:41.911 }, 00:22:41.911 { 00:22:41.911 "subsystem": "bdev", 00:22:41.911 "config": [ 00:22:41.911 { 00:22:41.911 "method": "bdev_set_options", 00:22:41.911 "params": { 00:22:41.911 "bdev_io_pool_size": 65535, 00:22:41.911 "bdev_io_cache_size": 256, 00:22:41.911 "bdev_auto_examine": true, 00:22:41.911 "iobuf_small_cache_size": 128, 00:22:41.911 "iobuf_large_cache_size": 16 00:22:41.911 } 00:22:41.911 }, 00:22:41.911 { 00:22:41.911 "method": "bdev_raid_set_options", 00:22:41.911 "params": { 00:22:41.911 "process_window_size_kb": 1024, 00:22:41.911 "process_max_bandwidth_mb_sec": 0 00:22:41.911 } 00:22:41.911 }, 00:22:41.911 { 00:22:41.911 "method": "bdev_iscsi_set_options", 00:22:41.911 "params": { 00:22:41.911 "timeout_sec": 30 00:22:41.911 } 00:22:41.911 }, 
00:22:41.911 { 00:22:41.911 "method": "bdev_nvme_set_options", 00:22:41.911 "params": { 00:22:41.911 "action_on_timeout": "none", 00:22:41.911 "timeout_us": 0, 00:22:41.911 "timeout_admin_us": 0, 00:22:41.911 "keep_alive_timeout_ms": 10000, 00:22:41.911 "arbitration_burst": 0, 00:22:41.911 "low_priority_weight": 0, 00:22:41.911 "medium_priority_weight": 0, 00:22:41.911 "high_priority_weight": 0, 00:22:41.911 "nvme_adminq_poll_period_us": 10000, 00:22:41.912 "nvme_ioq_poll_period_us": 0, 00:22:41.912 "io_queue_requests": 512, 00:22:41.912 "delay_cmd_submit": true, 00:22:41.912 "transport_retry_count": 4, 00:22:41.912 "bdev_retry_count": 3, 00:22:41.912 "transport_ack_timeout": 0, 00:22:41.912 "ctrlr_loss_timeout_sec": 0, 00:22:41.912 "reconnect_delay_sec": 0, 00:22:41.912 "fast_io_fail_timeout_sec": 0, 00:22:41.912 "disable_auto_failback": false, 00:22:41.912 "generate_uuids": false, 00:22:41.912 "transport_tos": 0, 00:22:41.912 "nvme_error_stat": false, 00:22:41.912 "rdma_srq_size": 0, 00:22:41.912 "io_path_stat": false, 00:22:41.912 "allow_accel_sequence": false, 00:22:41.912 "rdma_max_cq_size": 0, 00:22:41.912 "rdma_cm_event_timeout_ms": 0, 00:22:41.912 "dhchap_digests": [ 00:22:41.912 "sha256", 00:22:41.912 "sha384", 00:22:41.912 "sha512" 00:22:41.912 ], 00:22:41.912 "dhchap_dhgroups": [ 00:22:41.912 "null", 00:22:41.912 "ffdhe2048", 00:22:41.912 "ffdhe3072", 00:22:41.912 "ffdhe4096", 00:22:41.912 "ffdhe6144", 00:22:41.912 "ffdhe8192" 00:22:41.912 ] 00:22:41.912 } 00:22:41.912 }, 00:22:41.912 { 00:22:41.912 "method": "bdev_nvme_attach_controller", 00:22:41.912 "params": { 00:22:41.912 "name": "nvme0", 00:22:41.912 "trtype": "TCP", 00:22:41.912 "adrfam": "IPv4", 00:22:41.912 "traddr": "10.0.0.2", 00:22:41.912 "trsvcid": "4420", 00:22:41.912 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:41.912 "prchk_reftag": false, 00:22:41.912 "prchk_guard": false, 00:22:41.912 "ctrlr_loss_timeout_sec": 0, 00:22:41.912 "reconnect_delay_sec": 0, 00:22:41.912 
"fast_io_fail_timeout_sec": 0, 00:22:41.912 "psk": "key0", 00:22:41.912 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:41.912 "hdgst": false, 00:22:41.912 "ddgst": false, 00:22:41.912 "multipath": "multipath" 00:22:41.912 } 00:22:41.912 }, 00:22:41.912 { 00:22:41.912 "method": "bdev_nvme_set_hotplug", 00:22:41.912 "params": { 00:22:41.912 "period_us": 100000, 00:22:41.912 "enable": false 00:22:41.912 } 00:22:41.912 }, 00:22:41.912 { 00:22:41.912 "method": "bdev_enable_histogram", 00:22:41.912 "params": { 00:22:41.912 "name": "nvme0n1", 00:22:41.912 "enable": true 00:22:41.912 } 00:22:41.912 }, 00:22:41.912 { 00:22:41.912 "method": "bdev_wait_for_examine" 00:22:41.912 } 00:22:41.912 ] 00:22:41.912 }, 00:22:41.912 { 00:22:41.912 "subsystem": "nbd", 00:22:41.912 "config": [] 00:22:41.912 } 00:22:41.912 ] 00:22:41.912 }' 00:22:42.173 [2024-11-26 07:32:26.071778] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:22:42.173 [2024-11-26 07:32:26.071831] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139475 ] 00:22:42.173 [2024-11-26 07:32:26.162375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.173 [2024-11-26 07:32:26.192382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.436 [2024-11-26 07:32:26.327547] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:43.007 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.007 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:43.007 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:22:43.007 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:43.007 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.007 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:43.007 Running I/O for 1 seconds... 00:22:44.391 4666.00 IOPS, 18.23 MiB/s 00:22:44.391 Latency(us) 00:22:44.391 [2024-11-26T06:32:28.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.391 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:44.391 Verification LBA range: start 0x0 length 0x2000 00:22:44.391 nvme0n1 : 1.02 4718.78 18.43 0.00 0.00 26931.96 5270.19 48933.55 00:22:44.391 [2024-11-26T06:32:28.528Z] =================================================================================================================== 00:22:44.391 [2024-11-26T06:32:28.528Z] Total : 4718.78 18.43 0.00 0.00 26931.96 5270.19 48933.55 00:22:44.391 { 00:22:44.391 "results": [ 00:22:44.391 { 00:22:44.391 "job": "nvme0n1", 00:22:44.391 "core_mask": "0x2", 00:22:44.391 "workload": "verify", 00:22:44.391 "status": "finished", 00:22:44.391 "verify_range": { 00:22:44.391 "start": 0, 00:22:44.391 "length": 8192 00:22:44.391 }, 00:22:44.391 "queue_depth": 128, 00:22:44.391 "io_size": 4096, 00:22:44.391 "runtime": 1.016152, 00:22:44.391 "iops": 4718.782229430242, 00:22:44.391 "mibps": 18.432743083711884, 00:22:44.391 "io_failed": 0, 00:22:44.391 "io_timeout": 0, 00:22:44.391 "avg_latency_us": 26931.955987486966, 00:22:44.391 "min_latency_us": 5270.1866666666665, 00:22:44.391 "max_latency_us": 48933.54666666667 00:22:44.391 } 00:22:44.391 ], 00:22:44.391 "core_count": 1 00:22:44.391 } 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 
00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:44.391 nvmf_trace.0 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2139475 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2139475 ']' 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2139475 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# ps --no-headers -o comm= 2139475 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2139475' 00:22:44.391 killing process with pid 2139475 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2139475 00:22:44.391 Received shutdown signal, test time was about 1.000000 seconds 00:22:44.391 00:22:44.391 Latency(us) 00:22:44.391 [2024-11-26T06:32:28.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.391 [2024-11-26T06:32:28.528Z] =================================================================================================================== 00:22:44.391 [2024-11-26T06:32:28.528Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2139475 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:44.391 rmmod nvme_tcp 00:22:44.391 rmmod nvme_fabrics 00:22:44.391 rmmod nvme_keyring 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2139446 ']' 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2139446 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2139446 ']' 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2139446 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.391 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2139446 00:22:44.653 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:44.653 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:44.653 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2139446' 00:22:44.653 killing process with pid 2139446 00:22:44.653 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2139446 00:22:44.653 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2139446 00:22:44.653 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:44.653 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:44.653 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:44.653 07:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:22:44.653 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:22:44.653 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:44.653 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:22:44.653 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:44.653 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:44.653 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.653 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.653 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.203 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:47.203 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.otgW8jklEc /tmp/tmp.1jP4s5eYbO /tmp/tmp.XdJDwY1ZZ0 00:22:47.203 00:22:47.203 real 1m23.992s 00:22:47.203 user 2m8.657s 00:22:47.203 sys 0m27.282s 00:22:47.203 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:47.203 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.203 ************************************ 00:22:47.203 END TEST nvmf_tls 00:22:47.203 ************************************ 00:22:47.203 07:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:47.203 07:32:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:47.203 
07:32:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:47.203 07:32:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:47.203 ************************************ 00:22:47.203 START TEST nvmf_fips 00:22:47.203 ************************************ 00:22:47.203 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:47.203 * Looking for test storage... 00:22:47.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:47.203 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:47.203 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:22:47.203 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@340 -- # ver1_l=2 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # 
return 0 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:47.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.203 --rc genhtml_branch_coverage=1 00:22:47.203 --rc genhtml_function_coverage=1 00:22:47.203 --rc genhtml_legend=1 00:22:47.203 --rc geninfo_all_blocks=1 00:22:47.203 --rc geninfo_unexecuted_blocks=1 00:22:47.203 00:22:47.203 ' 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:47.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.203 --rc genhtml_branch_coverage=1 00:22:47.203 --rc genhtml_function_coverage=1 00:22:47.203 --rc genhtml_legend=1 00:22:47.203 --rc geninfo_all_blocks=1 00:22:47.203 --rc geninfo_unexecuted_blocks=1 00:22:47.203 00:22:47.203 ' 00:22:47.203 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:47.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.203 --rc genhtml_branch_coverage=1 00:22:47.203 --rc genhtml_function_coverage=1 00:22:47.203 --rc genhtml_legend=1 00:22:47.204 --rc geninfo_all_blocks=1 00:22:47.204 --rc geninfo_unexecuted_blocks=1 00:22:47.204 00:22:47.204 ' 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:47.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.204 --rc genhtml_branch_coverage=1 00:22:47.204 --rc genhtml_function_coverage=1 00:22:47.204 --rc genhtml_legend=1 00:22:47.204 --rc geninfo_all_blocks=1 00:22:47.204 --rc geninfo_unexecuted_blocks=1 00:22:47.204 00:22:47.204 ' 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:47.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:22:47.204 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:22:47.205 Error setting digest 00:22:47.205 40125C0B967F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:22:47.205 40125C0B967F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:47.205 07:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:22:47.205 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:55.351 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:55.351 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:55.351 Found net devices under 0000:31:00.0: cvl_0_0 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:55.351 Found net devices under 0000:31:00.1: cvl_0_1 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:55.351 07:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:55.351 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:55.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:55.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:22:55.612 00:22:55.612 --- 10.0.0.2 ping statistics --- 00:22:55.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.612 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:55.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:55.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:22:55.612 00:22:55.612 --- 10.0.0.1 ping statistics --- 00:22:55.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.612 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:55.612 07:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2144860 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2144860 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2144860 ']' 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.612 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:55.874 [2024-11-26 07:32:39.756336] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:22:55.874 [2024-11-26 07:32:39.756413] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.874 [2024-11-26 07:32:39.864624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.874 [2024-11-26 07:32:39.913908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.874 [2024-11-26 07:32:39.913963] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.874 [2024-11-26 07:32:39.913972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.874 [2024-11-26 07:32:39.913979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.874 [2024-11-26 07:32:39.913985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:55.874 [2024-11-26 07:32:39.914789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.447 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.447 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:56.447 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:56.447 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:56.447 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:56.709 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.709 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:56.709 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:56.709 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:22:56.709 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.nSb 00:22:56.709 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:56.709 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.nSb 00:22:56.709 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.nSb 00:22:56.709 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.nSb 00:22:56.709 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:56.709 [2024-11-26 07:32:40.775840] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.709 [2024-11-26 07:32:40.791839] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:56.709 [2024-11-26 07:32:40.792165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.709 malloc0 00:22:56.970 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:56.970 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2145163 00:22:56.970 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2145163 /var/tmp/bdevperf.sock 00:22:56.970 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:56.970 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2145163 ']' 00:22:56.971 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.971 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.971 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.971 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.971 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:56.971 [2024-11-26 07:32:40.935133] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:22:56.971 [2024-11-26 07:32:40.935211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2145163 ] 00:22:56.971 [2024-11-26 07:32:41.005061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.971 [2024-11-26 07:32:41.041117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.913 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.913 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:57.913 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.nSb 00:22:57.913 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:58.174 [2024-11-26 07:32:42.051706] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:58.174 TLSTESTn1 00:22:58.174 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:58.174 Running I/O for 10 seconds... 
00:23:00.503 4827.00 IOPS, 18.86 MiB/s [2024-11-26T06:32:45.581Z] 4722.50 IOPS, 18.45 MiB/s [2024-11-26T06:32:46.522Z] 4847.67 IOPS, 18.94 MiB/s [2024-11-26T06:32:47.464Z] 4815.75 IOPS, 18.81 MiB/s [2024-11-26T06:32:48.407Z] 4621.40 IOPS, 18.05 MiB/s [2024-11-26T06:32:49.352Z] 4650.33 IOPS, 18.17 MiB/s [2024-11-26T06:32:50.299Z] 4695.57 IOPS, 18.34 MiB/s [2024-11-26T06:32:51.684Z] 4720.38 IOPS, 18.44 MiB/s [2024-11-26T06:32:52.631Z] 4722.44 IOPS, 18.45 MiB/s [2024-11-26T06:32:52.631Z] 4741.90 IOPS, 18.52 MiB/s 00:23:08.494 Latency(us) 00:23:08.494 [2024-11-26T06:32:52.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.494 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:08.494 Verification LBA range: start 0x0 length 0x2000 00:23:08.494 TLSTESTn1 : 10.02 4744.07 18.53 0.00 0.00 26937.54 4805.97 72526.51 00:23:08.494 [2024-11-26T06:32:52.631Z] =================================================================================================================== 00:23:08.494 [2024-11-26T06:32:52.631Z] Total : 4744.07 18.53 0.00 0.00 26937.54 4805.97 72526.51 00:23:08.494 { 00:23:08.494 "results": [ 00:23:08.494 { 00:23:08.494 "job": "TLSTESTn1", 00:23:08.494 "core_mask": "0x4", 00:23:08.494 "workload": "verify", 00:23:08.494 "status": "finished", 00:23:08.494 "verify_range": { 00:23:08.494 "start": 0, 00:23:08.494 "length": 8192 00:23:08.494 }, 00:23:08.494 "queue_depth": 128, 00:23:08.494 "io_size": 4096, 00:23:08.494 "runtime": 10.022189, 00:23:08.494 "iops": 4744.0733755869105, 00:23:08.494 "mibps": 18.53153662338637, 00:23:08.494 "io_failed": 0, 00:23:08.494 "io_timeout": 0, 00:23:08.494 "avg_latency_us": 26937.54136541455, 00:23:08.494 "min_latency_us": 4805.973333333333, 00:23:08.494 "max_latency_us": 72526.50666666667 00:23:08.494 } 00:23:08.494 ], 00:23:08.494 "core_count": 1 00:23:08.494 } 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:08.494 
07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:08.494 nvmf_trace.0 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2145163 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2145163 ']' 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2145163 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2145163 00:23:08.494 07:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2145163' 00:23:08.494 killing process with pid 2145163 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2145163 00:23:08.494 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.494 00:23:08.494 Latency(us) 00:23:08.494 [2024-11-26T06:32:52.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.494 [2024-11-26T06:32:52.631Z] =================================================================================================================== 00:23:08.494 [2024-11-26T06:32:52.631Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2145163 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:08.494 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:08.494 rmmod nvme_tcp 00:23:08.494 rmmod nvme_fabrics 00:23:08.494 rmmod nvme_keyring 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2144860 ']' 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2144860 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2144860 ']' 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2144860 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2144860 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2144860' 00:23:08.756 killing process with pid 2144860 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2144860 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2144860 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:08.756 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.300 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:11.300 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.nSb 00:23:11.300 00:23:11.300 real 0m24.069s 00:23:11.300 user 0m24.643s 00:23:11.300 sys 0m10.679s 00:23:11.300 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:11.300 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:11.300 ************************************ 00:23:11.300 END TEST nvmf_fips 00:23:11.300 ************************************ 00:23:11.300 07:32:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:11.300 07:32:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:11.300 07:32:54 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:23:11.300 07:32:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:11.300 ************************************ 00:23:11.300 START TEST nvmf_control_msg_list 00:23:11.300 ************************************ 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:11.300 * Looking for test storage... 00:23:11.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:11.300 07:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:11.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.300 --rc genhtml_branch_coverage=1 00:23:11.300 --rc genhtml_function_coverage=1 00:23:11.300 --rc genhtml_legend=1 00:23:11.300 --rc geninfo_all_blocks=1 00:23:11.300 --rc geninfo_unexecuted_blocks=1 00:23:11.300 00:23:11.300 ' 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:11.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.300 --rc genhtml_branch_coverage=1 00:23:11.300 --rc genhtml_function_coverage=1 00:23:11.300 --rc genhtml_legend=1 00:23:11.300 --rc geninfo_all_blocks=1 00:23:11.300 --rc geninfo_unexecuted_blocks=1 00:23:11.300 00:23:11.300 ' 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:11.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.300 --rc genhtml_branch_coverage=1 00:23:11.300 --rc genhtml_function_coverage=1 00:23:11.300 --rc genhtml_legend=1 00:23:11.300 --rc geninfo_all_blocks=1 00:23:11.300 --rc geninfo_unexecuted_blocks=1 00:23:11.300 00:23:11.300 ' 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:23:11.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.300 --rc genhtml_branch_coverage=1 00:23:11.300 --rc genhtml_function_coverage=1 00:23:11.300 --rc genhtml_legend=1 00:23:11.300 --rc geninfo_all_blocks=1 00:23:11.300 --rc geninfo_unexecuted_blocks=1 00:23:11.300 00:23:11.300 ' 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.300 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.301 07:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:11.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:11.301 07:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:23:11.301 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:23:19.440 07:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:19.440 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:19.440 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:19.440 07:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:19.440 Found net devices under 0000:31:00.0: cvl_0_0 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:19.440 07:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:19.440 Found net devices under 0000:31:00.1: cvl_0_1 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:19.440 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:19.441 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:19.441 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:19.441 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:19.441 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:19.703 07:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:19.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:19.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms
00:23:19.703
00:23:19.703 --- 10.0.0.2 ping statistics ---
00:23:19.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:19.703 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms
00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:19.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:19.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms
00:23:19.703
00:23:19.703 --- 10.0.0.1 ping statistics ---
00:23:19.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:19.703 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms
00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0
00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t
tcp -o' 00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2152235 00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2152235 00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2152235 ']' 00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.703 07:33:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:19.964 [2024-11-26 07:33:03.838323] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:23:19.964 [2024-11-26 07:33:03.838398] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.964 [2024-11-26 07:33:03.928835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.964 [2024-11-26 07:33:03.968760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.964 [2024-11-26 07:33:03.968800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.964 [2024-11-26 07:33:03.968812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.964 [2024-11-26 07:33:03.968819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.964 [2024-11-26 07:33:03.968825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:19.964 [2024-11-26 07:33:03.969435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.535 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.535 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:23:20.535 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:20.535 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.535 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:20.795 [2024-11-26 07:33:04.674553] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:20.795 Malloc0 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:20.795 [2024-11-26 07:33:04.725407] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2152378 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2152379 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2152380 00:23:20.795 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2152378 00:23:20.796 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:20.796 [2024-11-26 07:33:04.795781] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
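[Editor's aside, not part of the test log: the three `spdk_nvme_perf` invocations above all run with queue depth 1 (`-q 1`). At queue depth 1 there is only one outstanding I/O, so the reported IOPS and average latency in the result tables that follow are (approximately) reciprocals of each other. The sketch below checks that relationship against the core-3 figures reported in this log; the small residual is expected, since perf spends a little time outside I/O submission.]

```python
# Sanity-check of spdk_nvme_perf output at queue depth 1 (-q 1):
# with a single outstanding I/O, IOPS ~= 1 / (average latency in seconds).
# The figures below are copied from the core-3 result table in this log.

reported_iops = 2222.98
reported_avg_latency_us = 449.68

# Implied throughput from the reported mean latency.
implied_iops = 1_000_000 / reported_avg_latency_us

# Agreement is within a fraction of a percent.
relative_error = abs(implied_iops - reported_iops) / reported_iops
print(f"implied IOPS = {implied_iops:.2f}, relative error = {relative_error:.4%}")
```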
00:23:20.796 [2024-11-26 07:33:04.815785] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:23:20.796 [2024-11-26 07:33:04.825959] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:23:21.738 Initializing NVMe Controllers
00:23:21.738 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:23:21.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:23:21.738 Initialization complete. Launching workers.
00:23:21.738 ========================================================
00:23:21.738 Latency(us)
00:23:21.738 Device Information : IOPS MiB/s Average min max
00:23:21.738 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40906.20 40750.00 41035.90
00:23:21.738 ========================================================
00:23:21.738 Total : 25.00 0.10 40906.20 40750.00 41035.90
00:23:22.000 07:33:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2152379
00:23:22.000 Initializing NVMe Controllers
00:23:22.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:23:22.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:23:22.000 Initialization complete. Launching workers.
00:23:22.000 ========================================================
00:23:22.000 Latency(us)
00:23:22.000 Device Information : IOPS MiB/s Average min max
00:23:22.000 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40902.82 40849.97 40951.88
00:23:22.000 ========================================================
00:23:22.000 Total : 25.00 0.10 40902.82 40849.97 40951.88
00:23:22.000
00:23:22.000 07:33:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2152380
00:23:22.000 Initializing NVMe Controllers
00:23:22.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:23:22.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:23:22.000 Initialization complete. Launching workers.
00:23:22.000 ========================================================
00:23:22.000 Latency(us)
00:23:22.000 Device Information : IOPS MiB/s Average min max
00:23:22.000 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2222.98 8.68 449.68 155.09 764.54
00:23:22.000 ========================================================
00:23:22.000 Total : 2222.98 8.68 449.68 155.09 764.54
00:23:22.000
00:23:22.000 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:23:22.000 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:23:22.000 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:22.000 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:23:22.000 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:22.000 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:23:22.000 07:33:06
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:22.000 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:22.000 rmmod nvme_tcp 00:23:22.000 rmmod nvme_fabrics 00:23:22.000 rmmod nvme_keyring 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2152235 ']' 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2152235 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2152235 ']' 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2152235 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2152235 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2152235' 00:23:22.262 killing process with pid 2152235 00:23:22.262 
07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2152235 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2152235 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.262 07:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:24.807 00:23:24.807 real 0m13.422s 00:23:24.807 user 0m8.202s 00:23:24.807 sys 0m7.256s 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:24.807 ************************************ 00:23:24.807 END TEST nvmf_control_msg_list 00:23:24.807 ************************************ 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:24.807 ************************************ 00:23:24.807 START TEST nvmf_wait_for_buf 00:23:24.807 ************************************ 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:24.807 * Looking for test storage... 
00:23:24.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:24.807 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:23:24.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.808 --rc genhtml_branch_coverage=1 00:23:24.808 --rc genhtml_function_coverage=1 00:23:24.808 --rc genhtml_legend=1 00:23:24.808 --rc geninfo_all_blocks=1 00:23:24.808 --rc geninfo_unexecuted_blocks=1 00:23:24.808 00:23:24.808 ' 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:24.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.808 --rc genhtml_branch_coverage=1 00:23:24.808 --rc genhtml_function_coverage=1 00:23:24.808 --rc genhtml_legend=1 00:23:24.808 --rc geninfo_all_blocks=1 00:23:24.808 --rc geninfo_unexecuted_blocks=1 00:23:24.808 00:23:24.808 ' 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:24.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.808 --rc genhtml_branch_coverage=1 00:23:24.808 --rc genhtml_function_coverage=1 00:23:24.808 --rc genhtml_legend=1 00:23:24.808 --rc geninfo_all_blocks=1 00:23:24.808 --rc geninfo_unexecuted_blocks=1 00:23:24.808 00:23:24.808 ' 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:24.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.808 --rc genhtml_branch_coverage=1 00:23:24.808 --rc genhtml_function_coverage=1 00:23:24.808 --rc genhtml_legend=1 00:23:24.808 --rc geninfo_all_blocks=1 00:23:24.808 --rc geninfo_unexecuted_blocks=1 00:23:24.808 00:23:24.808 ' 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:24.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:24.808 07:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:33.120 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:33.120 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:33.120 Found net devices under 0000:31:00.0: cvl_0_0 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.120 07:33:16 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:33.120 Found net devices under 0000:31:00.1: cvl_0_1 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:33.120 07:33:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:33.120 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:33.120 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:33.120 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.120 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.120 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.120 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:33.120 07:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:33.120 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:33.120 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:33.120 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:33.120 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:33.120 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.120 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:33.120 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:33.120 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:33.120 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.120 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.121 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.121 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:33.121 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.383 07:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:33.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:23:33.383 00:23:33.383 --- 10.0.0.2 ping statistics --- 00:23:33.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.383 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:33.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:23:33.383 00:23:33.383 --- 10.0.0.1 ping statistics --- 00:23:33.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.383 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2157850 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 2157850 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2157850 ']' 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.383 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:33.383 [2024-11-26 07:33:17.426528] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:23:33.383 [2024-11-26 07:33:17.426594] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.646 [2024-11-26 07:33:17.516099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.646 [2024-11-26 07:33:17.555958] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.646 [2024-11-26 07:33:17.555995] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:33.646 [2024-11-26 07:33:17.556004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.646 [2024-11-26 07:33:17.556012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.646 [2024-11-26 07:33:17.556020] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.646 [2024-11-26 07:33:17.556629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.218 
07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.218 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.479 Malloc0 00:23:34.479 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.479 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:23:34.479 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.479 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:23:34.479 [2024-11-26 07:33:18.363574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.479 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.479 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:34.479 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.479 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.479 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.479 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:34.479 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.479 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.479 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.479 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:34.479 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.479 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.479 [2024-11-26 07:33:18.399773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.479 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:34.479 07:33:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:34.479 [2024-11-26 07:33:18.502324] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:35.867 Initializing NVMe Controllers 00:23:35.867 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:35.867 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:23:35.867 Initialization complete. Launching workers. 00:23:35.867 ======================================================== 00:23:35.867 Latency(us) 00:23:35.867 Device Information : IOPS MiB/s Average min max 00:23:35.867 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33638.09 29964.15 70799.78 00:23:35.867 ======================================================== 00:23:35.867 Total : 124.00 15.50 33638.09 29964.15 70799.78 00:23:35.867 00:23:35.867 07:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:23:35.867 07:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:23:35.867 07:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.867 07:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:35.867 07:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.867 07:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:23:35.867 07:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:23:35.867 07:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:35.867 07:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:23:35.867 07:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:35.867 07:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:23:35.867 07:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:35.867 07:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:23:35.867 07:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:35.867 07:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:35.867 rmmod nvme_tcp 00:23:36.130 rmmod nvme_fabrics 00:23:36.130 rmmod nvme_keyring 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2157850 ']' 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2157850 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2157850 ']' 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2157850 
00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2157850 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2157850' 00:23:36.130 killing process with pid 2157850 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2157850 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2157850 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:36.130 07:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.130 07:33:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.679 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:38.679 00:23:38.679 real 0m13.807s 00:23:38.679 user 0m5.373s 00:23:38.679 sys 0m6.992s 00:23:38.679 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:38.679 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:38.679 ************************************ 00:23:38.679 END TEST nvmf_wait_for_buf 00:23:38.679 ************************************ 00:23:38.679 07:33:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:23:38.679 07:33:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:23:38.679 07:33:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:23:38.679 07:33:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:23:38.679 07:33:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:23:38.679 07:33:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:46.819 
07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:46.819 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:46.820 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.820 07:33:30 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:46.820 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:46.820 Found net devices under 0000:31:00.0: cvl_0_0 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:46.820 Found net devices under 0000:31:00.1: cvl_0_1 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:46.820 ************************************ 00:23:46.820 START TEST nvmf_perf_adq 00:23:46.820 ************************************ 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:46.820 * Looking for test storage... 00:23:46.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.820 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:46.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.820 --rc genhtml_branch_coverage=1 00:23:46.820 --rc genhtml_function_coverage=1 00:23:46.821 --rc genhtml_legend=1 00:23:46.821 --rc geninfo_all_blocks=1 00:23:46.821 --rc geninfo_unexecuted_blocks=1 00:23:46.821 00:23:46.821 ' 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:46.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.821 --rc genhtml_branch_coverage=1 00:23:46.821 --rc genhtml_function_coverage=1 00:23:46.821 --rc genhtml_legend=1 00:23:46.821 --rc geninfo_all_blocks=1 00:23:46.821 --rc geninfo_unexecuted_blocks=1 00:23:46.821 00:23:46.821 ' 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:46.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.821 --rc genhtml_branch_coverage=1 00:23:46.821 --rc genhtml_function_coverage=1 00:23:46.821 --rc genhtml_legend=1 00:23:46.821 --rc geninfo_all_blocks=1 00:23:46.821 --rc geninfo_unexecuted_blocks=1 00:23:46.821 00:23:46.821 ' 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:46.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.821 --rc genhtml_branch_coverage=1 00:23:46.821 --rc genhtml_function_coverage=1 00:23:46.821 --rc genhtml_legend=1 00:23:46.821 --rc geninfo_all_blocks=1 00:23:46.821 --rc geninfo_unexecuted_blocks=1 00:23:46.821 00:23:46.821 ' 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.821 07:33:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:46.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:46.821 07:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:54.964 07:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:54.964 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:54.964 
Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:54.964 Found net devices under 0000:31:00.0: cvl_0_0 00:23:54.964 07:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:54.964 Found net devices under 0000:31:00.1: cvl_0_1 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.964 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:54.965 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.965 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:54.965 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:54.965 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:23:54.965 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
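[Editor's note] At this point `adq_reload_driver` cycles the NIC driver so ADQ starts from a clean channel configuration. The steps traced here and just below, condensed as a dry run — `run()` only prints each command; dropping the `echo` would execute them for real (root and the `ice` driver required):

```shell
# Dry-run condensation of adq_reload_driver from the trace.
run() { echo "+ $*"; }
run modprobe -a sch_mqprio   # qdisc module ADQ uses for per-TC queue mapping
run rmmod ice                # unload the E810 driver...
run modprobe ice             # ...and reload it to reset channel state
run sleep 5                  # give the ports time to link up again
```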
00:23:54.965 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:56.350 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:58.264 07:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:03.555 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:03.555 07:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.555 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:03.556 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:03.556 Found net devices under 0000:31:00.0: cvl_0_0 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:03.556 Found net devices under 0000:31:00.1: cvl_0_1 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:03.556 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:03.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:03.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:24:03.816 00:24:03.816 --- 10.0.0.2 ping statistics --- 00:24:03.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.816 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:03.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:24:03.816 00:24:03.816 --- 10.0.0.1 ping statistics --- 00:24:03.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.816 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2169158 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2169158 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2169158 ']' 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.816 07:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:03.816 [2024-11-26 07:33:47.800401] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:24:03.816 [2024-11-26 07:33:47.800469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.816 [2024-11-26 07:33:47.891792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:03.816 [2024-11-26 07:33:47.934020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.816 [2024-11-26 07:33:47.934059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.816 [2024-11-26 07:33:47.934068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.816 [2024-11-26 07:33:47.934075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.816 [2024-11-26 07:33:47.934081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
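[Editor's note] The `nvmf_tcp_init` sequence traced earlier splits the two E810 ports across namespaces: the first port moves into `cvl_0_0_ns_spdk` as the target side (10.0.0.2) while the second stays in the root namespace as the initiator (10.0.0.1), and a ping in each direction verifies reachability. A dry-run condensation, assuming the interface names from this run (`run()` only prints; remove the `echo` to apply, root required):

```shell
# Dry-run sketch of the namespace split performed by nvmf_tcp_init.
run() { echo "+ $*"; }
NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"              # target port into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ping -c 1 10.0.0.2                           # reachability check, as in the trace
```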
00:24:03.816 [2024-11-26 07:33:47.935694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.816 [2024-11-26 07:33:47.935817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.816 [2024-11-26 07:33:47.935959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.816 [2024-11-26 07:33:47.935959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:04.759 07:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.759 [2024-11-26 07:33:48.781555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.759 Malloc1 00:24:04.759 07:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.759 [2024-11-26 07:33:48.850184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2169486 00:24:04.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:24:04.759 07:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:07.304 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:24:07.304 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.304 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:07.304 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.304 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:24:07.304 "tick_rate": 2400000000, 00:24:07.304 "poll_groups": [ 00:24:07.304 { 00:24:07.304 "name": "nvmf_tgt_poll_group_000", 00:24:07.304 "admin_qpairs": 1, 00:24:07.304 "io_qpairs": 1, 00:24:07.304 "current_admin_qpairs": 1, 00:24:07.304 "current_io_qpairs": 1, 00:24:07.304 "pending_bdev_io": 0, 00:24:07.304 "completed_nvme_io": 19375, 00:24:07.304 "transports": [ 00:24:07.304 { 00:24:07.304 "trtype": "TCP" 00:24:07.304 } 00:24:07.304 ] 00:24:07.304 }, 00:24:07.304 { 00:24:07.304 "name": "nvmf_tgt_poll_group_001", 00:24:07.304 "admin_qpairs": 0, 00:24:07.304 "io_qpairs": 1, 00:24:07.304 "current_admin_qpairs": 0, 00:24:07.304 "current_io_qpairs": 1, 00:24:07.304 "pending_bdev_io": 0, 00:24:07.304 "completed_nvme_io": 28090, 00:24:07.304 "transports": [ 00:24:07.304 { 00:24:07.304 "trtype": "TCP" 00:24:07.304 } 00:24:07.304 ] 00:24:07.304 }, 00:24:07.304 { 00:24:07.304 "name": "nvmf_tgt_poll_group_002", 00:24:07.304 "admin_qpairs": 0, 00:24:07.304 "io_qpairs": 1, 00:24:07.304 "current_admin_qpairs": 0, 00:24:07.304 "current_io_qpairs": 1, 00:24:07.304 "pending_bdev_io": 0, 00:24:07.304 "completed_nvme_io": 19441, 00:24:07.304 
"transports": [ 00:24:07.304 { 00:24:07.304 "trtype": "TCP" 00:24:07.304 } 00:24:07.304 ] 00:24:07.304 }, 00:24:07.304 { 00:24:07.304 "name": "nvmf_tgt_poll_group_003", 00:24:07.304 "admin_qpairs": 0, 00:24:07.304 "io_qpairs": 1, 00:24:07.304 "current_admin_qpairs": 0, 00:24:07.304 "current_io_qpairs": 1, 00:24:07.304 "pending_bdev_io": 0, 00:24:07.304 "completed_nvme_io": 19696, 00:24:07.304 "transports": [ 00:24:07.304 { 00:24:07.304 "trtype": "TCP" 00:24:07.304 } 00:24:07.304 ] 00:24:07.304 } 00:24:07.304 ] 00:24:07.304 }' 00:24:07.304 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:07.304 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:24:07.304 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:24:07.304 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:24:07.304 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2169486 00:24:15.440 Initializing NVMe Controllers 00:24:15.440 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:15.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:15.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:15.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:15.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:15.440 Initialization complete. Launching workers. 
00:24:15.440 ======================================================== 00:24:15.440 Latency(us) 00:24:15.440 Device Information : IOPS MiB/s Average min max 00:24:15.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10942.20 42.74 5850.43 1327.18 9931.38 00:24:15.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14698.60 57.42 4353.80 1325.81 9934.25 00:24:15.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13364.40 52.20 4788.77 1306.01 12456.17 00:24:15.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12723.50 49.70 5030.13 1288.73 12545.22 00:24:15.440 ======================================================== 00:24:15.440 Total : 51728.68 202.07 4949.12 1288.73 12545.22 00:24:15.440 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:15.440 rmmod nvme_tcp 00:24:15.440 rmmod nvme_fabrics 00:24:15.440 rmmod nvme_keyring 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:24:15.440 07:33:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2169158 ']' 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2169158 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2169158 ']' 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2169158 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2169158 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2169158' 00:24:15.440 killing process with pid 2169158 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2169158 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2169158 00:24:15.440 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:15.441 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:15.441 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:15.441 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:24:15.441 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:24:15.441 
07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:15.441 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:24:15.441 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:15.441 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:15.441 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.441 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.441 07:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.355 07:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:17.355 07:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:24:17.355 07:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:24:17.355 07:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:24:19.266 07:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:24:21.177 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:24:26.464 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:24:26.464 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:26.464 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.464 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:26.464 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:26.464 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:26.464 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.464 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.464 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:26.464 07:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:26.464 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:26.464 
Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:26.464 Found net devices under 0000:31:00.0: cvl_0_0 00:24:26.464 07:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:26.464 Found net devices under 0000:31:00.1: cvl_0_1 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:24:26.464 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:26.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:24:26.465 00:24:26.465 --- 10.0.0.2 ping statistics --- 00:24:26.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.465 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:26.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:24:26.465 00:24:26.465 --- 10.0.0.1 ping statistics --- 00:24:26.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.465 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:26.465 net.core.busy_poll = 1 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:26.465 net.core.busy_read = 1 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:26.465 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:26.727 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:26.727 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:26.727 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:26.727 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:26.727 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:26.727 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:26.727 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2174005 00:24:26.727 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2174005 00:24:26.727 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2174005 ']' 00:24:26.727 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:26.727 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.727 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.727 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.727 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.727 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:26.727 [2024-11-26 07:34:10.729143] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:24:26.727 [2024-11-26 07:34:10.729209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.727 [2024-11-26 07:34:10.822401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:26.988 [2024-11-26 07:34:10.862841] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.988 [2024-11-26 07:34:10.862887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.988 [2024-11-26 07:34:10.862896] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.988 [2024-11-26 07:34:10.862902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:26.988 [2024-11-26 07:34:10.862908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.988 [2024-11-26 07:34:10.864545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.988 [2024-11-26 07:34:10.864662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.988 [2024-11-26 07:34:10.864819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.988 [2024-11-26 07:34:10.864820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:27.559 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.559 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:24:27.560 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:27.560 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:27.560 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.560 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.560 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:24:27.560 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:27.560 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:27.560 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.560 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.560 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:27.560 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:27.560 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:27.560 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.560 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.560 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.560 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:27.560 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.560 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.820 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.820 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:27.820 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.820 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.820 [2024-11-26 07:34:11.706755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.821 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.821 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:27.821 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.821 07:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.821 Malloc1 00:24:27.821 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.821 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:27.821 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.821 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.821 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.821 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:27.821 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.821 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.821 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.821 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.821 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.821 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.821 [2024-11-26 07:34:11.780198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.821 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.821 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2174307 
00:24:27.821 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:24:27.821 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:29.735 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:24:29.735 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.735 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:29.735 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.735 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:24:29.735 "tick_rate": 2400000000, 00:24:29.735 "poll_groups": [ 00:24:29.735 { 00:24:29.735 "name": "nvmf_tgt_poll_group_000", 00:24:29.735 "admin_qpairs": 1, 00:24:29.735 "io_qpairs": 3, 00:24:29.735 "current_admin_qpairs": 1, 00:24:29.735 "current_io_qpairs": 3, 00:24:29.735 "pending_bdev_io": 0, 00:24:29.735 "completed_nvme_io": 29160, 00:24:29.735 "transports": [ 00:24:29.735 { 00:24:29.735 "trtype": "TCP" 00:24:29.735 } 00:24:29.735 ] 00:24:29.735 }, 00:24:29.735 { 00:24:29.735 "name": "nvmf_tgt_poll_group_001", 00:24:29.735 "admin_qpairs": 0, 00:24:29.735 "io_qpairs": 1, 00:24:29.735 "current_admin_qpairs": 0, 00:24:29.735 "current_io_qpairs": 1, 00:24:29.735 "pending_bdev_io": 0, 00:24:29.735 "completed_nvme_io": 39174, 00:24:29.735 "transports": [ 00:24:29.735 { 00:24:29.735 "trtype": "TCP" 00:24:29.735 } 00:24:29.735 ] 00:24:29.735 }, 00:24:29.735 { 00:24:29.735 "name": "nvmf_tgt_poll_group_002", 00:24:29.735 "admin_qpairs": 0, 00:24:29.735 "io_qpairs": 0, 00:24:29.735 "current_admin_qpairs": 0, 
00:24:29.735 "current_io_qpairs": 0, 00:24:29.735 "pending_bdev_io": 0, 00:24:29.735 "completed_nvme_io": 0, 00:24:29.735 "transports": [ 00:24:29.735 { 00:24:29.735 "trtype": "TCP" 00:24:29.735 } 00:24:29.735 ] 00:24:29.735 }, 00:24:29.735 { 00:24:29.735 "name": "nvmf_tgt_poll_group_003", 00:24:29.735 "admin_qpairs": 0, 00:24:29.735 "io_qpairs": 0, 00:24:29.735 "current_admin_qpairs": 0, 00:24:29.735 "current_io_qpairs": 0, 00:24:29.735 "pending_bdev_io": 0, 00:24:29.735 "completed_nvme_io": 0, 00:24:29.735 "transports": [ 00:24:29.735 { 00:24:29.735 "trtype": "TCP" 00:24:29.735 } 00:24:29.735 ] 00:24:29.735 } 00:24:29.735 ] 00:24:29.735 }' 00:24:29.735 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:29.735 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:24:29.735 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:24:29.735 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:24:29.735 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2174307 00:24:37.875 Initializing NVMe Controllers 00:24:37.875 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:37.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:37.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:37.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:37.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:37.875 Initialization complete. Launching workers. 
00:24:37.875 ======================================================== 00:24:37.875 Latency(us) 00:24:37.875 Device Information : IOPS MiB/s Average min max 00:24:37.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8279.20 32.34 7753.76 1406.25 56668.94 00:24:37.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6292.70 24.58 10195.68 1404.55 57527.55 00:24:37.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 21048.50 82.22 3040.07 1125.60 7470.75 00:24:37.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5237.70 20.46 12219.66 1215.98 57368.13 00:24:37.875 ======================================================== 00:24:37.875 Total : 40858.10 159.60 6274.03 1125.60 57527.55 00:24:37.875 00:24:37.875 07:34:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:24:37.875 07:34:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:37.875 07:34:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:24:37.875 07:34:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:37.875 07:34:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:24:37.875 07:34:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:37.875 07:34:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:37.875 rmmod nvme_tcp 00:24:38.137 rmmod nvme_fabrics 00:24:38.137 rmmod nvme_keyring 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:24:38.137 07:34:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2174005 ']' 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2174005 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2174005 ']' 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2174005 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2174005 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2174005' 00:24:38.137 killing process with pid 2174005 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2174005 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2174005 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:24:38.137 
07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.137 07:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.445 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:41.445 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:24:41.445 00:24:41.445 real 0m54.954s 00:24:41.445 user 2m50.080s 00:24:41.445 sys 0m12.038s 00:24:41.445 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:41.445 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:41.445 ************************************ 00:24:41.445 END TEST nvmf_perf_adq 00:24:41.445 ************************************ 00:24:41.445 07:34:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:41.445 07:34:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:41.445 07:34:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.445 07:34:25 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:24:41.445 ************************************ 00:24:41.445 START TEST nvmf_shutdown 00:24:41.445 ************************************ 00:24:41.445 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:41.445 * Looking for test storage... 00:24:41.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:41.445 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:41.445 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:24:41.445 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:41.707 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:41.707 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:41.708 07:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:41.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.708 --rc genhtml_branch_coverage=1 00:24:41.708 --rc genhtml_function_coverage=1 00:24:41.708 --rc genhtml_legend=1 00:24:41.708 --rc geninfo_all_blocks=1 00:24:41.708 --rc geninfo_unexecuted_blocks=1 00:24:41.708 00:24:41.708 ' 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:41.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.708 --rc genhtml_branch_coverage=1 00:24:41.708 --rc genhtml_function_coverage=1 00:24:41.708 --rc genhtml_legend=1 00:24:41.708 --rc geninfo_all_blocks=1 00:24:41.708 --rc geninfo_unexecuted_blocks=1 00:24:41.708 00:24:41.708 ' 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:41.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.708 --rc genhtml_branch_coverage=1 00:24:41.708 --rc genhtml_function_coverage=1 00:24:41.708 --rc genhtml_legend=1 00:24:41.708 --rc geninfo_all_blocks=1 00:24:41.708 --rc geninfo_unexecuted_blocks=1 00:24:41.708 00:24:41.708 ' 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:41.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.708 --rc genhtml_branch_coverage=1 00:24:41.708 --rc genhtml_function_coverage=1 00:24:41.708 --rc genhtml_legend=1 00:24:41.708 --rc geninfo_all_blocks=1 00:24:41.708 --rc geninfo_unexecuted_blocks=1 00:24:41.708 00:24:41.708 ' 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:41.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.708 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:41.708 ************************************ 00:24:41.708 START TEST nvmf_shutdown_tc1 00:24:41.708 ************************************ 00:24:41.709 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:24:41.709 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:24:41.709 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:41.709 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:41.709 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.709 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:41.709 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:41.709 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:41.709 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.709 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:24:41.709 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.709 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:41.709 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:41.709 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:41.709 07:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:24:49.972 07:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:49.972 07:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:49.972 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.972 07:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:49.972 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.972 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:49.973 Found net devices under 0000:31:00.0: cvl_0_0 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:49.973 Found net devices under 0000:31:00.1: cvl_0_1 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:49.973 07:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:49.973 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.973 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:49.973 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:49.973 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:50.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:50.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:24:50.234 00:24:50.234 --- 10.0.0.2 ping statistics --- 00:24:50.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.234 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:50.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:50.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:24:50.234 00:24:50.234 --- 10.0.0.1 ping statistics --- 00:24:50.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.234 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2181411 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2181411 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2181411 ']' 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:50.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.234 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:50.234 [2024-11-26 07:34:34.233823] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:24:50.234 [2024-11-26 07:34:34.233910] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.234 [2024-11-26 07:34:34.343916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:50.495 [2024-11-26 07:34:34.396383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.495 [2024-11-26 07:34:34.396438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:50.495 [2024-11-26 07:34:34.396446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.495 [2024-11-26 07:34:34.396454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.495 [2024-11-26 07:34:34.396460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:50.495 [2024-11-26 07:34:34.398478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.495 [2024-11-26 07:34:34.398643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:50.495 [2024-11-26 07:34:34.398812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.495 [2024-11-26 07:34:34.398812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:51.067 [2024-11-26 07:34:35.090284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.067 07:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:51.067 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.068 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:24:51.068 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.068 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:51.068 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.068 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:51.068 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.068 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:51.068 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.068 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:51.068 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.068 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:51.068 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:51.068 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.068 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:51.068 Malloc1 00:24:51.329 [2024-11-26 07:34:35.213633] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.329 Malloc2 00:24:51.329 Malloc3 00:24:51.329 Malloc4 00:24:51.329 Malloc5 00:24:51.329 Malloc6 00:24:51.329 Malloc7 00:24:51.591 Malloc8 00:24:51.591 Malloc9 
00:24:51.591 Malloc10 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2181654 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2181654 /var/tmp/bdevperf.sock 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2181654 ']' 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:51.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.591 { 00:24:51.591 "params": { 00:24:51.591 "name": "Nvme$subsystem", 00:24:51.591 "trtype": "$TEST_TRANSPORT", 00:24:51.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.591 "adrfam": "ipv4", 00:24:51.591 "trsvcid": "$NVMF_PORT", 00:24:51.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.591 "hdgst": ${hdgst:-false}, 00:24:51.591 "ddgst": ${ddgst:-false} 00:24:51.591 }, 00:24:51.591 "method": "bdev_nvme_attach_controller" 00:24:51.591 } 00:24:51.591 EOF 00:24:51.591 )") 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.591 { 00:24:51.591 "params": { 00:24:51.591 "name": "Nvme$subsystem", 00:24:51.591 "trtype": "$TEST_TRANSPORT", 00:24:51.591 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.591 "adrfam": "ipv4", 00:24:51.591 "trsvcid": "$NVMF_PORT", 00:24:51.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.591 "hdgst": ${hdgst:-false}, 00:24:51.591 "ddgst": ${ddgst:-false} 00:24:51.591 }, 00:24:51.591 "method": "bdev_nvme_attach_controller" 00:24:51.591 } 00:24:51.591 EOF 00:24:51.591 )") 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.591 { 00:24:51.591 "params": { 00:24:51.591 "name": "Nvme$subsystem", 00:24:51.591 "trtype": "$TEST_TRANSPORT", 00:24:51.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.591 "adrfam": "ipv4", 00:24:51.591 "trsvcid": "$NVMF_PORT", 00:24:51.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.591 "hdgst": ${hdgst:-false}, 00:24:51.591 "ddgst": ${ddgst:-false} 00:24:51.591 }, 00:24:51.591 "method": "bdev_nvme_attach_controller" 00:24:51.591 } 00:24:51.591 EOF 00:24:51.591 )") 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.591 { 00:24:51.591 "params": { 00:24:51.591 "name": "Nvme$subsystem", 00:24:51.591 "trtype": "$TEST_TRANSPORT", 00:24:51.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.591 "adrfam": "ipv4", 00:24:51.591 "trsvcid": "$NVMF_PORT", 00:24:51.591 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.591 "hdgst": ${hdgst:-false}, 00:24:51.591 "ddgst": ${ddgst:-false} 00:24:51.591 }, 00:24:51.591 "method": "bdev_nvme_attach_controller" 00:24:51.591 } 00:24:51.591 EOF 00:24:51.591 )") 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.591 { 00:24:51.591 "params": { 00:24:51.591 "name": "Nvme$subsystem", 00:24:51.591 "trtype": "$TEST_TRANSPORT", 00:24:51.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.591 "adrfam": "ipv4", 00:24:51.591 "trsvcid": "$NVMF_PORT", 00:24:51.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.591 "hdgst": ${hdgst:-false}, 00:24:51.591 "ddgst": ${ddgst:-false} 00:24:51.591 }, 00:24:51.591 "method": "bdev_nvme_attach_controller" 00:24:51.591 } 00:24:51.591 EOF 00:24:51.591 )") 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.591 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.591 { 00:24:51.591 "params": { 00:24:51.591 "name": "Nvme$subsystem", 00:24:51.591 "trtype": "$TEST_TRANSPORT", 00:24:51.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.591 "adrfam": "ipv4", 00:24:51.591 "trsvcid": "$NVMF_PORT", 00:24:51.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.591 "hdgst": 
${hdgst:-false}, 00:24:51.591 "ddgst": ${ddgst:-false} 00:24:51.592 }, 00:24:51.592 "method": "bdev_nvme_attach_controller" 00:24:51.592 } 00:24:51.592 EOF 00:24:51.592 )") 00:24:51.592 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.592 [2024-11-26 07:34:35.670965] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:24:51.592 [2024-11-26 07:34:35.671019] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:51.592 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.592 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.592 { 00:24:51.592 "params": { 00:24:51.592 "name": "Nvme$subsystem", 00:24:51.592 "trtype": "$TEST_TRANSPORT", 00:24:51.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.592 "adrfam": "ipv4", 00:24:51.592 "trsvcid": "$NVMF_PORT", 00:24:51.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.592 "hdgst": ${hdgst:-false}, 00:24:51.592 "ddgst": ${ddgst:-false} 00:24:51.592 }, 00:24:51.592 "method": "bdev_nvme_attach_controller" 00:24:51.592 } 00:24:51.592 EOF 00:24:51.592 )") 00:24:51.592 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.592 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.592 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.592 { 00:24:51.592 "params": { 00:24:51.592 "name": "Nvme$subsystem", 00:24:51.592 "trtype": 
"$TEST_TRANSPORT", 00:24:51.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.592 "adrfam": "ipv4", 00:24:51.592 "trsvcid": "$NVMF_PORT", 00:24:51.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.592 "hdgst": ${hdgst:-false}, 00:24:51.592 "ddgst": ${ddgst:-false} 00:24:51.592 }, 00:24:51.592 "method": "bdev_nvme_attach_controller" 00:24:51.592 } 00:24:51.592 EOF 00:24:51.592 )") 00:24:51.592 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.592 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.592 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.592 { 00:24:51.592 "params": { 00:24:51.592 "name": "Nvme$subsystem", 00:24:51.592 "trtype": "$TEST_TRANSPORT", 00:24:51.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.592 "adrfam": "ipv4", 00:24:51.592 "trsvcid": "$NVMF_PORT", 00:24:51.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.592 "hdgst": ${hdgst:-false}, 00:24:51.592 "ddgst": ${ddgst:-false} 00:24:51.592 }, 00:24:51.592 "method": "bdev_nvme_attach_controller" 00:24:51.592 } 00:24:51.592 EOF 00:24:51.592 )") 00:24:51.592 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.592 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.592 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.592 { 00:24:51.592 "params": { 00:24:51.592 "name": "Nvme$subsystem", 00:24:51.592 "trtype": "$TEST_TRANSPORT", 00:24:51.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.592 "adrfam": "ipv4", 00:24:51.592 "trsvcid": 
"$NVMF_PORT", 00:24:51.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.592 "hdgst": ${hdgst:-false}, 00:24:51.592 "ddgst": ${ddgst:-false} 00:24:51.592 }, 00:24:51.592 "method": "bdev_nvme_attach_controller" 00:24:51.592 } 00:24:51.592 EOF 00:24:51.592 )") 00:24:51.592 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.592 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:24:51.592 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:24:51.592 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:51.592 "params": { 00:24:51.592 "name": "Nvme1", 00:24:51.592 "trtype": "tcp", 00:24:51.592 "traddr": "10.0.0.2", 00:24:51.592 "adrfam": "ipv4", 00:24:51.592 "trsvcid": "4420", 00:24:51.592 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.592 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:51.592 "hdgst": false, 00:24:51.592 "ddgst": false 00:24:51.592 }, 00:24:51.592 "method": "bdev_nvme_attach_controller" 00:24:51.592 },{ 00:24:51.592 "params": { 00:24:51.592 "name": "Nvme2", 00:24:51.592 "trtype": "tcp", 00:24:51.592 "traddr": "10.0.0.2", 00:24:51.592 "adrfam": "ipv4", 00:24:51.592 "trsvcid": "4420", 00:24:51.592 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:51.592 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:51.592 "hdgst": false, 00:24:51.592 "ddgst": false 00:24:51.592 }, 00:24:51.592 "method": "bdev_nvme_attach_controller" 00:24:51.592 },{ 00:24:51.592 "params": { 00:24:51.592 "name": "Nvme3", 00:24:51.592 "trtype": "tcp", 00:24:51.592 "traddr": "10.0.0.2", 00:24:51.592 "adrfam": "ipv4", 00:24:51.592 "trsvcid": "4420", 00:24:51.592 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:51.592 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:51.592 "hdgst": false, 00:24:51.592 
"ddgst": false 00:24:51.592 }, 00:24:51.592 "method": "bdev_nvme_attach_controller" 00:24:51.592 },{ 00:24:51.592 "params": { 00:24:51.592 "name": "Nvme4", 00:24:51.592 "trtype": "tcp", 00:24:51.592 "traddr": "10.0.0.2", 00:24:51.592 "adrfam": "ipv4", 00:24:51.592 "trsvcid": "4420", 00:24:51.592 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:51.592 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:51.592 "hdgst": false, 00:24:51.592 "ddgst": false 00:24:51.592 }, 00:24:51.592 "method": "bdev_nvme_attach_controller" 00:24:51.592 },{ 00:24:51.592 "params": { 00:24:51.592 "name": "Nvme5", 00:24:51.592 "trtype": "tcp", 00:24:51.592 "traddr": "10.0.0.2", 00:24:51.592 "adrfam": "ipv4", 00:24:51.592 "trsvcid": "4420", 00:24:51.592 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:51.592 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:51.592 "hdgst": false, 00:24:51.592 "ddgst": false 00:24:51.592 }, 00:24:51.592 "method": "bdev_nvme_attach_controller" 00:24:51.592 },{ 00:24:51.592 "params": { 00:24:51.592 "name": "Nvme6", 00:24:51.592 "trtype": "tcp", 00:24:51.592 "traddr": "10.0.0.2", 00:24:51.592 "adrfam": "ipv4", 00:24:51.592 "trsvcid": "4420", 00:24:51.592 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:51.592 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:51.592 "hdgst": false, 00:24:51.592 "ddgst": false 00:24:51.592 }, 00:24:51.592 "method": "bdev_nvme_attach_controller" 00:24:51.592 },{ 00:24:51.592 "params": { 00:24:51.592 "name": "Nvme7", 00:24:51.592 "trtype": "tcp", 00:24:51.592 "traddr": "10.0.0.2", 00:24:51.592 "adrfam": "ipv4", 00:24:51.592 "trsvcid": "4420", 00:24:51.592 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:51.592 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:51.592 "hdgst": false, 00:24:51.592 "ddgst": false 00:24:51.592 }, 00:24:51.592 "method": "bdev_nvme_attach_controller" 00:24:51.592 },{ 00:24:51.592 "params": { 00:24:51.592 "name": "Nvme8", 00:24:51.592 "trtype": "tcp", 00:24:51.592 "traddr": "10.0.0.2", 00:24:51.592 "adrfam": "ipv4", 00:24:51.592 
"trsvcid": "4420", 00:24:51.592 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:51.592 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:51.592 "hdgst": false, 00:24:51.592 "ddgst": false 00:24:51.592 }, 00:24:51.592 "method": "bdev_nvme_attach_controller" 00:24:51.592 },{ 00:24:51.592 "params": { 00:24:51.592 "name": "Nvme9", 00:24:51.592 "trtype": "tcp", 00:24:51.592 "traddr": "10.0.0.2", 00:24:51.592 "adrfam": "ipv4", 00:24:51.592 "trsvcid": "4420", 00:24:51.592 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:51.592 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:51.592 "hdgst": false, 00:24:51.592 "ddgst": false 00:24:51.592 }, 00:24:51.592 "method": "bdev_nvme_attach_controller" 00:24:51.592 },{ 00:24:51.592 "params": { 00:24:51.592 "name": "Nvme10", 00:24:51.592 "trtype": "tcp", 00:24:51.592 "traddr": "10.0.0.2", 00:24:51.592 "adrfam": "ipv4", 00:24:51.592 "trsvcid": "4420", 00:24:51.592 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:51.592 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:51.592 "hdgst": false, 00:24:51.592 "ddgst": false 00:24:51.592 }, 00:24:51.592 "method": "bdev_nvme_attach_controller" 00:24:51.592 }' 00:24:51.853 [2024-11-26 07:34:35.750988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.853 [2024-11-26 07:34:35.787451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.238 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:53.238 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:24:53.238 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:53.238 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.238 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@10 -- # set +x 00:24:53.238 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.238 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2181654 00:24:53.238 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:24:53.238 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:24:54.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2181654 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:54.180 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2181411 00:24:54.180 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:54.180 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:54.180 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:24:54.180 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:24:54.180 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:54.180 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:54.180 { 00:24:54.180 "params": { 00:24:54.180 "name": "Nvme$subsystem", 00:24:54.180 "trtype": "$TEST_TRANSPORT", 00:24:54.180 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:24:54.180 "adrfam": "ipv4", 00:24:54.180 "trsvcid": "$NVMF_PORT", 00:24:54.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.180 "hdgst": ${hdgst:-false}, 00:24:54.180 "ddgst": ${ddgst:-false} 00:24:54.180 }, 00:24:54.180 "method": "bdev_nvme_attach_controller" 00:24:54.180 } 00:24:54.180 EOF 00:24:54.180 )") 00:24:54.180 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:54.180 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:54.180 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:54.180 { 00:24:54.180 "params": { 00:24:54.180 "name": "Nvme$subsystem", 00:24:54.180 "trtype": "$TEST_TRANSPORT", 00:24:54.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.180 "adrfam": "ipv4", 00:24:54.180 "trsvcid": "$NVMF_PORT", 00:24:54.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.180 "hdgst": ${hdgst:-false}, 00:24:54.180 "ddgst": ${ddgst:-false} 00:24:54.180 }, 00:24:54.180 "method": "bdev_nvme_attach_controller" 00:24:54.180 } 00:24:54.180 EOF 00:24:54.180 )") 00:24:54.180 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:54.180 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:54.180 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:54.180 { 00:24:54.180 "params": { 00:24:54.180 "name": "Nvme$subsystem", 00:24:54.180 "trtype": "$TEST_TRANSPORT", 00:24:54.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.180 "adrfam": "ipv4", 00:24:54.180 "trsvcid": "$NVMF_PORT", 00:24:54.180 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.181 "hdgst": ${hdgst:-false}, 00:24:54.181 "ddgst": ${ddgst:-false} 00:24:54.181 }, 00:24:54.181 "method": "bdev_nvme_attach_controller" 00:24:54.181 } 00:24:54.181 EOF 00:24:54.181 )") 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:54.181 { 00:24:54.181 "params": { 00:24:54.181 "name": "Nvme$subsystem", 00:24:54.181 "trtype": "$TEST_TRANSPORT", 00:24:54.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.181 "adrfam": "ipv4", 00:24:54.181 "trsvcid": "$NVMF_PORT", 00:24:54.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.181 "hdgst": ${hdgst:-false}, 00:24:54.181 "ddgst": ${ddgst:-false} 00:24:54.181 }, 00:24:54.181 "method": "bdev_nvme_attach_controller" 00:24:54.181 } 00:24:54.181 EOF 00:24:54.181 )") 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:54.181 { 00:24:54.181 "params": { 00:24:54.181 "name": "Nvme$subsystem", 00:24:54.181 "trtype": "$TEST_TRANSPORT", 00:24:54.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.181 "adrfam": "ipv4", 00:24:54.181 "trsvcid": "$NVMF_PORT", 00:24:54.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.181 "hdgst": 
${hdgst:-false}, 00:24:54.181 "ddgst": ${ddgst:-false} 00:24:54.181 }, 00:24:54.181 "method": "bdev_nvme_attach_controller" 00:24:54.181 } 00:24:54.181 EOF 00:24:54.181 )") 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:54.181 { 00:24:54.181 "params": { 00:24:54.181 "name": "Nvme$subsystem", 00:24:54.181 "trtype": "$TEST_TRANSPORT", 00:24:54.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.181 "adrfam": "ipv4", 00:24:54.181 "trsvcid": "$NVMF_PORT", 00:24:54.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.181 "hdgst": ${hdgst:-false}, 00:24:54.181 "ddgst": ${ddgst:-false} 00:24:54.181 }, 00:24:54.181 "method": "bdev_nvme_attach_controller" 00:24:54.181 } 00:24:54.181 EOF 00:24:54.181 )") 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:54.181 { 00:24:54.181 "params": { 00:24:54.181 "name": "Nvme$subsystem", 00:24:54.181 "trtype": "$TEST_TRANSPORT", 00:24:54.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.181 "adrfam": "ipv4", 00:24:54.181 "trsvcid": "$NVMF_PORT", 00:24:54.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.181 "hdgst": ${hdgst:-false}, 00:24:54.181 "ddgst": ${ddgst:-false} 00:24:54.181 }, 00:24:54.181 "method": "bdev_nvme_attach_controller" 
00:24:54.181 } 00:24:54.181 EOF 00:24:54.181 )") 00:24:54.181 [2024-11-26 07:34:38.084921] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:24:54.181 [2024-11-26 07:34:38.084974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182206 ] 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:54.181 { 00:24:54.181 "params": { 00:24:54.181 "name": "Nvme$subsystem", 00:24:54.181 "trtype": "$TEST_TRANSPORT", 00:24:54.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.181 "adrfam": "ipv4", 00:24:54.181 "trsvcid": "$NVMF_PORT", 00:24:54.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.181 "hdgst": ${hdgst:-false}, 00:24:54.181 "ddgst": ${ddgst:-false} 00:24:54.181 }, 00:24:54.181 "method": "bdev_nvme_attach_controller" 00:24:54.181 } 00:24:54.181 EOF 00:24:54.181 )") 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:54.181 { 00:24:54.181 "params": { 00:24:54.181 "name": "Nvme$subsystem", 00:24:54.181 "trtype": "$TEST_TRANSPORT", 00:24:54.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.181 "adrfam": "ipv4", 00:24:54.181 
"trsvcid": "$NVMF_PORT", 00:24:54.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.181 "hdgst": ${hdgst:-false}, 00:24:54.181 "ddgst": ${ddgst:-false} 00:24:54.181 }, 00:24:54.181 "method": "bdev_nvme_attach_controller" 00:24:54.181 } 00:24:54.181 EOF 00:24:54.181 )") 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:54.181 { 00:24:54.181 "params": { 00:24:54.181 "name": "Nvme$subsystem", 00:24:54.181 "trtype": "$TEST_TRANSPORT", 00:24:54.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.181 "adrfam": "ipv4", 00:24:54.181 "trsvcid": "$NVMF_PORT", 00:24:54.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.181 "hdgst": ${hdgst:-false}, 00:24:54.181 "ddgst": ${ddgst:-false} 00:24:54.181 }, 00:24:54.181 "method": "bdev_nvme_attach_controller" 00:24:54.181 } 00:24:54.181 EOF 00:24:54.181 )") 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:24:54.181 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:54.181 "params": { 00:24:54.181 "name": "Nvme1", 00:24:54.181 "trtype": "tcp", 00:24:54.181 "traddr": "10.0.0.2", 00:24:54.181 "adrfam": "ipv4", 00:24:54.181 "trsvcid": "4420", 00:24:54.181 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.181 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:54.181 "hdgst": false, 00:24:54.181 "ddgst": false 00:24:54.181 }, 00:24:54.181 "method": "bdev_nvme_attach_controller" 00:24:54.181 },{ 00:24:54.181 "params": { 00:24:54.181 "name": "Nvme2", 00:24:54.181 "trtype": "tcp", 00:24:54.181 "traddr": "10.0.0.2", 00:24:54.181 "adrfam": "ipv4", 00:24:54.181 "trsvcid": "4420", 00:24:54.181 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:54.181 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:54.181 "hdgst": false, 00:24:54.181 "ddgst": false 00:24:54.181 }, 00:24:54.181 "method": "bdev_nvme_attach_controller" 00:24:54.181 },{ 00:24:54.181 "params": { 00:24:54.181 "name": "Nvme3", 00:24:54.181 "trtype": "tcp", 00:24:54.181 "traddr": "10.0.0.2", 00:24:54.181 "adrfam": "ipv4", 00:24:54.181 "trsvcid": "4420", 00:24:54.181 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:54.181 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:54.181 "hdgst": false, 00:24:54.181 "ddgst": false 00:24:54.181 }, 00:24:54.181 "method": "bdev_nvme_attach_controller" 00:24:54.181 },{ 00:24:54.181 "params": { 00:24:54.181 "name": "Nvme4", 00:24:54.181 "trtype": "tcp", 00:24:54.181 "traddr": "10.0.0.2", 00:24:54.181 "adrfam": "ipv4", 00:24:54.181 "trsvcid": "4420", 00:24:54.181 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:54.181 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:54.181 "hdgst": false, 00:24:54.181 "ddgst": false 00:24:54.181 }, 00:24:54.181 "method": "bdev_nvme_attach_controller" 00:24:54.181 },{ 00:24:54.181 "params": { 
00:24:54.181 "name": "Nvme5", 00:24:54.181 "trtype": "tcp", 00:24:54.181 "traddr": "10.0.0.2", 00:24:54.181 "adrfam": "ipv4", 00:24:54.181 "trsvcid": "4420", 00:24:54.181 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:54.181 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:54.181 "hdgst": false, 00:24:54.181 "ddgst": false 00:24:54.181 }, 00:24:54.181 "method": "bdev_nvme_attach_controller" 00:24:54.181 },{ 00:24:54.181 "params": { 00:24:54.182 "name": "Nvme6", 00:24:54.182 "trtype": "tcp", 00:24:54.182 "traddr": "10.0.0.2", 00:24:54.182 "adrfam": "ipv4", 00:24:54.182 "trsvcid": "4420", 00:24:54.182 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:54.182 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:54.182 "hdgst": false, 00:24:54.182 "ddgst": false 00:24:54.182 }, 00:24:54.182 "method": "bdev_nvme_attach_controller" 00:24:54.182 },{ 00:24:54.182 "params": { 00:24:54.182 "name": "Nvme7", 00:24:54.182 "trtype": "tcp", 00:24:54.182 "traddr": "10.0.0.2", 00:24:54.182 "adrfam": "ipv4", 00:24:54.182 "trsvcid": "4420", 00:24:54.182 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:54.182 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:54.182 "hdgst": false, 00:24:54.182 "ddgst": false 00:24:54.182 }, 00:24:54.182 "method": "bdev_nvme_attach_controller" 00:24:54.182 },{ 00:24:54.182 "params": { 00:24:54.182 "name": "Nvme8", 00:24:54.182 "trtype": "tcp", 00:24:54.182 "traddr": "10.0.0.2", 00:24:54.182 "adrfam": "ipv4", 00:24:54.182 "trsvcid": "4420", 00:24:54.182 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:54.182 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:54.182 "hdgst": false, 00:24:54.182 "ddgst": false 00:24:54.182 }, 00:24:54.182 "method": "bdev_nvme_attach_controller" 00:24:54.182 },{ 00:24:54.182 "params": { 00:24:54.182 "name": "Nvme9", 00:24:54.182 "trtype": "tcp", 00:24:54.182 "traddr": "10.0.0.2", 00:24:54.182 "adrfam": "ipv4", 00:24:54.182 "trsvcid": "4420", 00:24:54.182 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:54.182 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:24:54.182 "hdgst": false, 00:24:54.182 "ddgst": false 00:24:54.182 }, 00:24:54.182 "method": "bdev_nvme_attach_controller" 00:24:54.182 },{ 00:24:54.182 "params": { 00:24:54.182 "name": "Nvme10", 00:24:54.182 "trtype": "tcp", 00:24:54.182 "traddr": "10.0.0.2", 00:24:54.182 "adrfam": "ipv4", 00:24:54.182 "trsvcid": "4420", 00:24:54.182 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:54.182 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:54.182 "hdgst": false, 00:24:54.182 "ddgst": false 00:24:54.182 }, 00:24:54.182 "method": "bdev_nvme_attach_controller" 00:24:54.182 }' 00:24:54.182 [2024-11-26 07:34:38.163902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.182 [2024-11-26 07:34:38.200119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.564 Running I/O for 1 seconds... 00:24:56.766 1811.00 IOPS, 113.19 MiB/s 00:24:56.766 Latency(us) 00:24:56.766 [2024-11-26T06:34:40.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.766 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:56.766 Verification LBA range: start 0x0 length 0x400 00:24:56.766 Nvme1n1 : 1.13 227.19 14.20 0.00 0.00 278947.20 18350.08 295348.91 00:24:56.766 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:56.766 Verification LBA range: start 0x0 length 0x400 00:24:56.766 Nvme2n1 : 1.14 225.27 14.08 0.00 0.00 276332.59 17039.36 241172.48 00:24:56.766 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:56.766 Verification LBA range: start 0x0 length 0x400 00:24:56.766 Nvme3n1 : 1.07 243.06 15.19 0.00 0.00 248717.73 19114.67 230686.72 00:24:56.766 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:56.766 Verification LBA range: start 0x0 length 0x400 00:24:56.766 Nvme4n1 : 1.08 237.82 14.86 0.00 0.00 251790.08 18240.85 272629.76 00:24:56.766 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:56.766 Verification LBA range: start 0x0 length 0x400 00:24:56.766 Nvme5n1 : 1.14 224.54 14.03 0.00 0.00 262834.77 21736.11 256901.12 00:24:56.766 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:56.766 Verification LBA range: start 0x0 length 0x400 00:24:56.767 Nvme6n1 : 1.19 219.08 13.69 0.00 0.00 254727.89 18568.53 251658.24 00:24:56.767 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:56.767 Verification LBA range: start 0x0 length 0x400 00:24:56.767 Nvme7n1 : 1.20 265.94 16.62 0.00 0.00 215114.07 14854.83 237677.23 00:24:56.767 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:56.767 Verification LBA range: start 0x0 length 0x400 00:24:56.767 Nvme8n1 : 1.20 267.36 16.71 0.00 0.00 209623.38 13544.11 256901.12 00:24:56.767 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:56.767 Verification LBA range: start 0x0 length 0x400 00:24:56.767 Nvme9n1 : 1.21 264.18 16.51 0.00 0.00 208956.93 8192.00 267386.88 00:24:56.767 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:56.767 Verification LBA range: start 0x0 length 0x400 00:24:56.767 Nvme10n1 : 1.22 263.08 16.44 0.00 0.00 206275.58 8847.36 267386.88 00:24:56.767 [2024-11-26T06:34:40.904Z] =================================================================================================================== 00:24:56.767 [2024-11-26T06:34:40.904Z] Total : 2437.53 152.35 0.00 0.00 238526.22 8192.00 295348.91 00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:56.767 rmmod nvme_tcp 00:24:56.767 rmmod nvme_fabrics 00:24:56.767 rmmod nvme_keyring 00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2181411 ']' 00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2181411 00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2181411 ']' 00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 2181411 00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:24:56.767 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:57.028 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2181411 00:24:57.028 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:57.028 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:57.028 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2181411' 00:24:57.028 killing process with pid 2181411 00:24:57.028 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2181411 00:24:57.028 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2181411 00:24:57.288 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:57.288 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:57.288 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:57.288 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:24:57.288 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:24:57.288 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:57.288 07:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:24:57.288 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:57.288 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:57.288 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.288 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.288 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.204 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:59.204 00:24:59.204 real 0m17.559s 00:24:59.204 user 0m33.227s 00:24:59.204 sys 0m7.590s 00:24:59.204 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:59.204 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:59.204 ************************************ 00:24:59.204 END TEST nvmf_shutdown_tc1 00:24:59.204 ************************************ 00:24:59.204 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:59.204 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:59.204 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:59.204 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:59.465 ************************************ 00:24:59.465 
START TEST nvmf_shutdown_tc2 00:24:59.465 ************************************ 00:24:59.465 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:24:59.465 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:24:59.465 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:59.465 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:59.465 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.465 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:59.466 07:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:59.466 07:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:59.466 07:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:59.466 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:59.466 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:59.466 07:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.466 07:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:59.466 Found net devices under 0000:31:00.0: cvl_0_0 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:59.466 Found net devices under 0000:31:00.1: cvl_0_1 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.466 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:59.467 07:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:59.467 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.467 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.467 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.467 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.467 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:59.467 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:59.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:59.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:24:59.728 00:24:59.728 --- 10.0.0.2 ping statistics --- 00:24:59.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.728 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:59.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:24:59.728 00:24:59.728 --- 10.0.0.1 ping statistics --- 00:24:59.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.728 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:59.728 07:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2183329 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2183329 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2183329 ']' 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:59.728 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:59.728 [2024-11-26 07:34:43.830762] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:24:59.728 [2024-11-26 07:34:43.830834] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.989 [2024-11-26 07:34:43.936083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:59.989 [2024-11-26 07:34:43.969922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.989 [2024-11-26 07:34:43.969956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.989 [2024-11-26 07:34:43.969962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.989 [2024-11-26 07:34:43.969967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.989 [2024-11-26 07:34:43.969971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:59.989 [2024-11-26 07:34:43.971293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:59.989 [2024-11-26 07:34:43.971456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:59.989 [2024-11-26 07:34:43.971617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.989 [2024-11-26 07:34:43.971619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:00.560 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.560 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:00.560 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:00.560 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:00.560 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.560 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.560 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:00.560 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.560 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.560 [2024-11-26 07:34:44.669991] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.560 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.560 07:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:00.560 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:00.560 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:00.560 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.560 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:00.560 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:00.560 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.822 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.822 Malloc1 00:25:00.822 [2024-11-26 07:34:44.783636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.822 Malloc2 00:25:00.822 Malloc3 00:25:00.822 Malloc4 00:25:00.822 Malloc5 00:25:01.083 Malloc6 00:25:01.083 Malloc7 00:25:01.083 Malloc8 00:25:01.083 Malloc9 
00:25:01.083 Malloc10 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2183708 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2183708 /var/tmp/bdevperf.sock 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2183708 ']' 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:01.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.083 { 00:25:01.083 "params": { 00:25:01.083 "name": "Nvme$subsystem", 00:25:01.083 "trtype": "$TEST_TRANSPORT", 00:25:01.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.083 "adrfam": "ipv4", 00:25:01.083 "trsvcid": "$NVMF_PORT", 00:25:01.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.083 "hdgst": ${hdgst:-false}, 00:25:01.083 "ddgst": ${ddgst:-false} 00:25:01.083 }, 00:25:01.083 "method": "bdev_nvme_attach_controller" 00:25:01.083 } 00:25:01.083 EOF 00:25:01.083 )") 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.083 { 00:25:01.083 "params": { 00:25:01.083 "name": "Nvme$subsystem", 00:25:01.083 "trtype": "$TEST_TRANSPORT", 00:25:01.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.083 "adrfam": "ipv4", 00:25:01.083 "trsvcid": "$NVMF_PORT", 00:25:01.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.083 "hdgst": ${hdgst:-false}, 00:25:01.083 "ddgst": ${ddgst:-false} 00:25:01.083 }, 00:25:01.083 "method": "bdev_nvme_attach_controller" 00:25:01.083 } 00:25:01.083 EOF 00:25:01.083 )") 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.083 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.083 { 00:25:01.083 "params": { 00:25:01.083 "name": "Nvme$subsystem", 00:25:01.083 "trtype": "$TEST_TRANSPORT", 00:25:01.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.084 "adrfam": "ipv4", 00:25:01.084 "trsvcid": "$NVMF_PORT", 00:25:01.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.084 "hdgst": ${hdgst:-false}, 00:25:01.084 "ddgst": ${ddgst:-false} 00:25:01.084 }, 00:25:01.084 "method": "bdev_nvme_attach_controller" 00:25:01.084 } 00:25:01.084 EOF 00:25:01.084 )") 00:25:01.084 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:01.084 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.084 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:25:01.084 { 00:25:01.084 "params": { 00:25:01.084 "name": "Nvme$subsystem", 00:25:01.084 "trtype": "$TEST_TRANSPORT", 00:25:01.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.084 "adrfam": "ipv4", 00:25:01.084 "trsvcid": "$NVMF_PORT", 00:25:01.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.084 "hdgst": ${hdgst:-false}, 00:25:01.084 "ddgst": ${ddgst:-false} 00:25:01.084 }, 00:25:01.084 "method": "bdev_nvme_attach_controller" 00:25:01.084 } 00:25:01.084 EOF 00:25:01.084 )") 00:25:01.084 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.345 { 00:25:01.345 "params": { 00:25:01.345 "name": "Nvme$subsystem", 00:25:01.345 "trtype": "$TEST_TRANSPORT", 00:25:01.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.345 "adrfam": "ipv4", 00:25:01.345 "trsvcid": "$NVMF_PORT", 00:25:01.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.345 "hdgst": ${hdgst:-false}, 00:25:01.345 "ddgst": ${ddgst:-false} 00:25:01.345 }, 00:25:01.345 "method": "bdev_nvme_attach_controller" 00:25:01.345 } 00:25:01.345 EOF 00:25:01.345 )") 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.345 { 00:25:01.345 "params": { 00:25:01.345 "name": "Nvme$subsystem", 00:25:01.345 "trtype": "$TEST_TRANSPORT", 
00:25:01.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.345 "adrfam": "ipv4", 00:25:01.345 "trsvcid": "$NVMF_PORT", 00:25:01.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.345 "hdgst": ${hdgst:-false}, 00:25:01.345 "ddgst": ${ddgst:-false} 00:25:01.345 }, 00:25:01.345 "method": "bdev_nvme_attach_controller" 00:25:01.345 } 00:25:01.345 EOF 00:25:01.345 )") 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:01.345 [2024-11-26 07:34:45.232590] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:25:01.345 [2024-11-26 07:34:45.232646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183708 ] 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.345 { 00:25:01.345 "params": { 00:25:01.345 "name": "Nvme$subsystem", 00:25:01.345 "trtype": "$TEST_TRANSPORT", 00:25:01.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.345 "adrfam": "ipv4", 00:25:01.345 "trsvcid": "$NVMF_PORT", 00:25:01.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.345 "hdgst": ${hdgst:-false}, 00:25:01.345 "ddgst": ${ddgst:-false} 00:25:01.345 }, 00:25:01.345 "method": "bdev_nvme_attach_controller" 00:25:01.345 } 00:25:01.345 EOF 00:25:01.345 )") 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.345 { 00:25:01.345 "params": { 00:25:01.345 "name": "Nvme$subsystem", 00:25:01.345 "trtype": "$TEST_TRANSPORT", 00:25:01.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.345 "adrfam": "ipv4", 00:25:01.345 "trsvcid": "$NVMF_PORT", 00:25:01.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.345 "hdgst": ${hdgst:-false}, 00:25:01.345 "ddgst": ${ddgst:-false} 00:25:01.345 }, 00:25:01.345 "method": "bdev_nvme_attach_controller" 00:25:01.345 } 00:25:01.345 EOF 00:25:01.345 )") 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.345 { 00:25:01.345 "params": { 00:25:01.345 "name": "Nvme$subsystem", 00:25:01.345 "trtype": "$TEST_TRANSPORT", 00:25:01.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.345 "adrfam": "ipv4", 00:25:01.345 "trsvcid": "$NVMF_PORT", 00:25:01.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.345 "hdgst": ${hdgst:-false}, 00:25:01.345 "ddgst": ${ddgst:-false} 00:25:01.345 }, 00:25:01.345 "method": "bdev_nvme_attach_controller" 00:25:01.345 } 00:25:01.345 EOF 00:25:01.345 )") 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:01.345 07:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:01.345 { 00:25:01.345 "params": { 00:25:01.345 "name": "Nvme$subsystem", 00:25:01.345 "trtype": "$TEST_TRANSPORT", 00:25:01.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.345 "adrfam": "ipv4", 00:25:01.345 "trsvcid": "$NVMF_PORT", 00:25:01.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.345 "hdgst": ${hdgst:-false}, 00:25:01.345 "ddgst": ${ddgst:-false} 00:25:01.345 }, 00:25:01.345 "method": "bdev_nvme_attach_controller" 00:25:01.345 } 00:25:01.345 EOF 00:25:01.345 )") 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:25:01.345 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:01.345 "params": { 00:25:01.345 "name": "Nvme1", 00:25:01.345 "trtype": "tcp", 00:25:01.345 "traddr": "10.0.0.2", 00:25:01.345 "adrfam": "ipv4", 00:25:01.345 "trsvcid": "4420", 00:25:01.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:01.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:01.345 "hdgst": false, 00:25:01.345 "ddgst": false 00:25:01.345 }, 00:25:01.345 "method": "bdev_nvme_attach_controller" 00:25:01.345 },{ 00:25:01.345 "params": { 00:25:01.345 "name": "Nvme2", 00:25:01.345 "trtype": "tcp", 00:25:01.345 "traddr": "10.0.0.2", 00:25:01.345 "adrfam": "ipv4", 00:25:01.345 "trsvcid": "4420", 00:25:01.345 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:01.345 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:01.345 "hdgst": false, 00:25:01.345 "ddgst": false 00:25:01.345 }, 00:25:01.345 "method": "bdev_nvme_attach_controller" 00:25:01.345 },{ 
00:25:01.345 "params": { 00:25:01.345 "name": "Nvme3", 00:25:01.345 "trtype": "tcp", 00:25:01.345 "traddr": "10.0.0.2", 00:25:01.345 "adrfam": "ipv4", 00:25:01.345 "trsvcid": "4420", 00:25:01.345 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:01.345 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:01.345 "hdgst": false, 00:25:01.345 "ddgst": false 00:25:01.345 }, 00:25:01.345 "method": "bdev_nvme_attach_controller" 00:25:01.345 },{ 00:25:01.345 "params": { 00:25:01.345 "name": "Nvme4", 00:25:01.345 "trtype": "tcp", 00:25:01.345 "traddr": "10.0.0.2", 00:25:01.345 "adrfam": "ipv4", 00:25:01.345 "trsvcid": "4420", 00:25:01.345 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:01.345 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:01.345 "hdgst": false, 00:25:01.345 "ddgst": false 00:25:01.345 }, 00:25:01.345 "method": "bdev_nvme_attach_controller" 00:25:01.345 },{ 00:25:01.345 "params": { 00:25:01.345 "name": "Nvme5", 00:25:01.345 "trtype": "tcp", 00:25:01.345 "traddr": "10.0.0.2", 00:25:01.345 "adrfam": "ipv4", 00:25:01.345 "trsvcid": "4420", 00:25:01.345 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:01.345 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:01.345 "hdgst": false, 00:25:01.345 "ddgst": false 00:25:01.345 }, 00:25:01.345 "method": "bdev_nvme_attach_controller" 00:25:01.345 },{ 00:25:01.345 "params": { 00:25:01.345 "name": "Nvme6", 00:25:01.345 "trtype": "tcp", 00:25:01.345 "traddr": "10.0.0.2", 00:25:01.345 "adrfam": "ipv4", 00:25:01.345 "trsvcid": "4420", 00:25:01.345 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:01.345 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:01.345 "hdgst": false, 00:25:01.345 "ddgst": false 00:25:01.345 }, 00:25:01.345 "method": "bdev_nvme_attach_controller" 00:25:01.345 },{ 00:25:01.346 "params": { 00:25:01.346 "name": "Nvme7", 00:25:01.346 "trtype": "tcp", 00:25:01.346 "traddr": "10.0.0.2", 00:25:01.346 "adrfam": "ipv4", 00:25:01.346 "trsvcid": "4420", 00:25:01.346 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:01.346 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:25:01.346 "hdgst": false, 00:25:01.346 "ddgst": false 00:25:01.346 }, 00:25:01.346 "method": "bdev_nvme_attach_controller" 00:25:01.346 },{ 00:25:01.346 "params": { 00:25:01.346 "name": "Nvme8", 00:25:01.346 "trtype": "tcp", 00:25:01.346 "traddr": "10.0.0.2", 00:25:01.346 "adrfam": "ipv4", 00:25:01.346 "trsvcid": "4420", 00:25:01.346 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:01.346 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:01.346 "hdgst": false, 00:25:01.346 "ddgst": false 00:25:01.346 }, 00:25:01.346 "method": "bdev_nvme_attach_controller" 00:25:01.346 },{ 00:25:01.346 "params": { 00:25:01.346 "name": "Nvme9", 00:25:01.346 "trtype": "tcp", 00:25:01.346 "traddr": "10.0.0.2", 00:25:01.346 "adrfam": "ipv4", 00:25:01.346 "trsvcid": "4420", 00:25:01.346 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:01.346 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:01.346 "hdgst": false, 00:25:01.346 "ddgst": false 00:25:01.346 }, 00:25:01.346 "method": "bdev_nvme_attach_controller" 00:25:01.346 },{ 00:25:01.346 "params": { 00:25:01.346 "name": "Nvme10", 00:25:01.346 "trtype": "tcp", 00:25:01.346 "traddr": "10.0.0.2", 00:25:01.346 "adrfam": "ipv4", 00:25:01.346 "trsvcid": "4420", 00:25:01.346 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:01.346 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:01.346 "hdgst": false, 00:25:01.346 "ddgst": false 00:25:01.346 }, 00:25:01.346 "method": "bdev_nvme_attach_controller" 00:25:01.346 }' 00:25:01.346 [2024-11-26 07:34:45.311703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.346 [2024-11-26 07:34:45.347972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.265 Running I/O for 10 seconds... 
00:25:03.265 07:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.265 07:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:03.265 07:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:03.265 07:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.265 07:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:03.265 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.265 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:03.265 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:03.265 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:03.265 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:25:03.265 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:25:03.265 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:03.265 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:03.265 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:03.265 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:03.265 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.265 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:03.265 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.265 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:25:03.265 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:25:03.265 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:03.530 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:03.530 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:03.530 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:03.530 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:03.530 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.530 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:03.530 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.530 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:25:03.530 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:25:03.530 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2183708 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2183708 
']' 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2183708 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2183708 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2183708' 00:25:03.791 killing process with pid 2183708 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2183708 00:25:03.791 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2183708 00:25:04.053 Received shutdown signal, test time was about 0.991542 seconds 00:25:04.053 00:25:04.053 Latency(us) 00:25:04.053 [2024-11-26T06:34:48.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.053 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.053 Verification LBA range: start 0x0 length 0x400 00:25:04.053 Nvme1n1 : 0.96 200.04 12.50 0.00 0.00 316229.12 14308.69 255153.49 00:25:04.053 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.053 Verification LBA range: start 0x0 length 0x400 00:25:04.053 Nvme2n1 : 0.98 260.99 16.31 0.00 0.00 237516.80 17585.49 255153.49 
00:25:04.053 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.053 Verification LBA range: start 0x0 length 0x400 00:25:04.053 Nvme3n1 : 0.97 264.54 16.53 0.00 0.00 229653.23 9338.88 249910.61 00:25:04.053 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.053 Verification LBA range: start 0x0 length 0x400 00:25:04.053 Nvme4n1 : 0.98 262.49 16.41 0.00 0.00 226838.19 14417.92 246415.36 00:25:04.053 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.053 Verification LBA range: start 0x0 length 0x400 00:25:04.053 Nvme5n1 : 0.99 259.72 16.23 0.00 0.00 224685.44 16820.91 246415.36 00:25:04.053 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.053 Verification LBA range: start 0x0 length 0x400 00:25:04.053 Nvme6n1 : 0.99 258.49 16.16 0.00 0.00 221076.27 17039.36 267386.88 00:25:04.053 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.053 Verification LBA range: start 0x0 length 0x400 00:25:04.053 Nvme7n1 : 0.98 261.69 16.36 0.00 0.00 213402.24 24685.23 241172.48 00:25:04.053 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.053 Verification LBA range: start 0x0 length 0x400 00:25:04.053 Nvme8n1 : 0.99 259.03 16.19 0.00 0.00 211188.27 18896.21 235929.60 00:25:04.053 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.053 Verification LBA range: start 0x0 length 0x400 00:25:04.053 Nvme9n1 : 0.97 197.69 12.36 0.00 0.00 269800.96 18022.40 267386.88 00:25:04.053 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:04.053 Verification LBA range: start 0x0 length 0x400 00:25:04.053 Nvme10n1 : 0.96 200.90 12.56 0.00 0.00 258390.47 27962.03 246415.36 00:25:04.053 [2024-11-26T06:34:48.190Z] =================================================================================================================== 00:25:04.053 
[2024-11-26T06:34:48.190Z] Total : 2425.58 151.60 0.00 0.00 237586.58 9338.88 267386.88 00:25:04.053 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:25:04.996 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2183329 00:25:04.996 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:25:04.996 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:04.996 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:04.996 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:04.996 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:04.996 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:04.996 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:25:04.996 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:04.996 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:25:04.996 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:04.996 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:04.996 rmmod nvme_tcp 00:25:04.996 rmmod nvme_fabrics 00:25:05.256 rmmod nvme_keyring 00:25:05.256 07:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:05.256 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:25:05.256 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:25:05.256 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2183329 ']' 00:25:05.256 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2183329 00:25:05.256 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2183329 ']' 00:25:05.256 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2183329 00:25:05.256 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:25:05.256 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:05.256 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2183329 00:25:05.256 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:05.256 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:05.256 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2183329' 00:25:05.256 killing process with pid 2183329 00:25:05.256 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2183329 00:25:05.256 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@978 -- # wait 2183329 00:25:05.516 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:05.516 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:05.516 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:05.516 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:25:05.516 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:05.516 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:25:05.516 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:25:05.516 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:05.516 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:05.516 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.516 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.516 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.429 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:07.429 00:25:07.429 real 0m8.182s 00:25:07.429 user 0m24.995s 00:25:07.429 sys 0m1.313s 00:25:07.429 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:25:07.429 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:07.429 ************************************ 00:25:07.429 END TEST nvmf_shutdown_tc2 00:25:07.429 ************************************ 00:25:07.690 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:07.690 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:07.690 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:07.690 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:07.690 ************************************ 00:25:07.690 START TEST nvmf_shutdown_tc3 00:25:07.690 ************************************ 00:25:07.690 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # local -ga net_devs 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:07.691 07:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:07.691 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:07.691 07:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:07.691 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:07.691 Found net devices under 0000:31:00.0: cvl_0_0 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.691 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:07.692 Found net devices under 0000:31:00.1: cvl_0_1 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.692 07:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:07.692 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:07.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:07.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:25:07.954 00:25:07.954 --- 10.0.0.2 ping statistics --- 00:25:07.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.954 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:07.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:25:07.954 00:25:07.954 --- 10.0.0.1 ping statistics --- 00:25:07.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.954 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2185172 00:25:07.954 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2185172 00:25:07.955 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:07.955 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2185172 ']' 00:25:07.955 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.955 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:07.955 07:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.955 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:07.955 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:07.955 [2024-11-26 07:34:52.024250] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:25:07.955 [2024-11-26 07:34:52.024316] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.216 [2024-11-26 07:34:52.127806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:08.216 [2024-11-26 07:34:52.161941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.216 [2024-11-26 07:34:52.161987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.216 [2024-11-26 07:34:52.161994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.216 [2024-11-26 07:34:52.161999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.216 [2024-11-26 07:34:52.162003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
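[Editor's note] The target above is launched with `-m 0x1E` and the trace then shows reactors starting on cores 1-4, which is exactly what that core mask selects. As a minimal illustrative sketch (the helper name `decode_coremask` is hypothetical, not part of SPDK's scripts), the mask-to-core mapping can be reproduced in bash:

```shell
# Hypothetical helper: decode an SPDK -m/-c core mask into the list of
# selected core numbers. 0x1E = binary 11110, i.e. bits 1..4 are set,
# matching the four "Reactor started on core N" notices in the log.
decode_coremask() {
  local mask=$(( $1 )) core cores=()
  for (( core = 0; core < 64; core++ )); do
    # Keep this core if its bit is set in the mask.
    (( mask & (1 << core) )) && cores+=("$core")
  done
  echo "${cores[@]}"
}

decode_coremask 0x1E   # -> 1 2 3 4
```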
00:25:08.216 [2024-11-26 07:34:52.163305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.216 [2024-11-26 07:34:52.163462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:08.216 [2024-11-26 07:34:52.163615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.216 [2024-11-26 07:34:52.163616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:08.787 [2024-11-26 07:34:52.853634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.787 07:34:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:08.787 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:09.046 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:09.046 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.046 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:09.046 Malloc1 00:25:09.046 [2024-11-26 07:34:52.961731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.046 Malloc2 00:25:09.046 Malloc3 00:25:09.046 Malloc4 00:25:09.046 Malloc5 00:25:09.046 Malloc6 00:25:09.046 Malloc7 00:25:09.307 Malloc8 00:25:09.307 Malloc9 
00:25:09.307 Malloc10 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2185448 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2185448 /var/tmp/bdevperf.sock 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2185448 ']' 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:09.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
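[Editor's note] The trace that follows shows `gen_nvmf_target_json 1 2 ... 10` assembling one `bdev_nvme_attach_controller` "params" block per subsystem id and joining them with `IFS=,` before feeding bdevperf via `--json /dev/fd/63`. A minimal sketch of that pattern, assuming fixed transport/address values from the log (the function name `gen_attach_params` is hypothetical, and the real helper's `jq` merge and outer bdevperf JSON wrapper are omitted):

```shell
# Sketch of the config-assembly loop visible in the trace: one JSON
# object per subsystem id, comma-joined into an array body.
gen_attach_params() {
  local TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
  local subsystem config=()
  for subsystem in "$@"; do
    config+=("{\"params\":{\"name\":\"Nvme$subsystem\",\"trtype\":\"$TEST_TRANSPORT\",\"traddr\":\"$NVMF_FIRST_TARGET_IP\",\"adrfam\":\"ipv4\",\"trsvcid\":\"$NVMF_PORT\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$subsystem\",\"hostnqn\":\"nqn.2016-06.io.spdk:host$subsystem\",\"hdgst\":false,\"ddgst\":false},\"method\":\"bdev_nvme_attach_controller\"}")
  done
  # Join the per-subsystem objects with commas, as the real helper does
  # with IFS=, before piping the result through jq.
  local IFS=,
  printf '%s\n' "${config[*]}"
}

gen_attach_params 1 2
```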
00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.307 { 00:25:09.307 "params": { 00:25:09.307 "name": "Nvme$subsystem", 00:25:09.307 "trtype": "$TEST_TRANSPORT", 00:25:09.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.307 "adrfam": "ipv4", 00:25:09.307 "trsvcid": "$NVMF_PORT", 00:25:09.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.307 "hdgst": ${hdgst:-false}, 00:25:09.307 "ddgst": ${ddgst:-false} 00:25:09.307 }, 00:25:09.307 "method": "bdev_nvme_attach_controller" 00:25:09.307 } 00:25:09.307 EOF 00:25:09.307 )") 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.307 { 00:25:09.307 "params": { 00:25:09.307 "name": "Nvme$subsystem", 00:25:09.307 "trtype": "$TEST_TRANSPORT", 00:25:09.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.307 "adrfam": "ipv4", 00:25:09.307 "trsvcid": "$NVMF_PORT", 00:25:09.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.307 "hdgst": ${hdgst:-false}, 00:25:09.307 "ddgst": ${ddgst:-false} 00:25:09.307 }, 00:25:09.307 "method": "bdev_nvme_attach_controller" 00:25:09.307 } 00:25:09.307 EOF 00:25:09.307 )") 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.307 { 00:25:09.307 "params": { 00:25:09.307 "name": "Nvme$subsystem", 00:25:09.307 "trtype": "$TEST_TRANSPORT", 00:25:09.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.307 "adrfam": "ipv4", 00:25:09.307 "trsvcid": "$NVMF_PORT", 00:25:09.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.307 "hdgst": ${hdgst:-false}, 00:25:09.307 "ddgst": ${ddgst:-false} 00:25:09.307 }, 00:25:09.307 "method": "bdev_nvme_attach_controller" 00:25:09.307 } 00:25:09.307 EOF 00:25:09.307 )") 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:25:09.307 { 00:25:09.307 "params": { 00:25:09.307 "name": "Nvme$subsystem", 00:25:09.307 "trtype": "$TEST_TRANSPORT", 00:25:09.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.307 "adrfam": "ipv4", 00:25:09.307 "trsvcid": "$NVMF_PORT", 00:25:09.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.307 "hdgst": ${hdgst:-false}, 00:25:09.307 "ddgst": ${ddgst:-false} 00:25:09.307 }, 00:25:09.307 "method": "bdev_nvme_attach_controller" 00:25:09.307 } 00:25:09.307 EOF 00:25:09.307 )") 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.307 { 00:25:09.307 "params": { 00:25:09.307 "name": "Nvme$subsystem", 00:25:09.307 "trtype": "$TEST_TRANSPORT", 00:25:09.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.307 "adrfam": "ipv4", 00:25:09.307 "trsvcid": "$NVMF_PORT", 00:25:09.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.307 "hdgst": ${hdgst:-false}, 00:25:09.307 "ddgst": ${ddgst:-false} 00:25:09.307 }, 00:25:09.307 "method": "bdev_nvme_attach_controller" 00:25:09.307 } 00:25:09.307 EOF 00:25:09.307 )") 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.307 { 00:25:09.307 "params": { 00:25:09.307 "name": "Nvme$subsystem", 00:25:09.307 "trtype": "$TEST_TRANSPORT", 
00:25:09.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.307 "adrfam": "ipv4", 00:25:09.307 "trsvcid": "$NVMF_PORT", 00:25:09.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.307 "hdgst": ${hdgst:-false}, 00:25:09.307 "ddgst": ${ddgst:-false} 00:25:09.307 }, 00:25:09.307 "method": "bdev_nvme_attach_controller" 00:25:09.307 } 00:25:09.307 EOF 00:25:09.307 )") 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.307 [2024-11-26 07:34:53.416911] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:25:09.307 [2024-11-26 07:34:53.416966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185448 ] 00:25:09.307 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.308 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.308 { 00:25:09.308 "params": { 00:25:09.308 "name": "Nvme$subsystem", 00:25:09.308 "trtype": "$TEST_TRANSPORT", 00:25:09.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.308 "adrfam": "ipv4", 00:25:09.308 "trsvcid": "$NVMF_PORT", 00:25:09.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.308 "hdgst": ${hdgst:-false}, 00:25:09.308 "ddgst": ${ddgst:-false} 00:25:09.308 }, 00:25:09.308 "method": "bdev_nvme_attach_controller" 00:25:09.308 } 00:25:09.308 EOF 00:25:09.308 )") 00:25:09.308 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.308 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.308 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.308 { 00:25:09.308 "params": { 00:25:09.308 "name": "Nvme$subsystem", 00:25:09.308 "trtype": "$TEST_TRANSPORT", 00:25:09.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.308 "adrfam": "ipv4", 00:25:09.308 "trsvcid": "$NVMF_PORT", 00:25:09.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.308 "hdgst": ${hdgst:-false}, 00:25:09.308 "ddgst": ${ddgst:-false} 00:25:09.308 }, 00:25:09.308 "method": "bdev_nvme_attach_controller" 00:25:09.308 } 00:25:09.308 EOF 00:25:09.308 )") 00:25:09.308 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.308 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.308 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.308 { 00:25:09.308 "params": { 00:25:09.308 "name": "Nvme$subsystem", 00:25:09.308 "trtype": "$TEST_TRANSPORT", 00:25:09.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.308 "adrfam": "ipv4", 00:25:09.308 "trsvcid": "$NVMF_PORT", 00:25:09.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.308 "hdgst": ${hdgst:-false}, 00:25:09.308 "ddgst": ${ddgst:-false} 00:25:09.308 }, 00:25:09.308 "method": "bdev_nvme_attach_controller" 00:25:09.308 } 00:25:09.308 EOF 00:25:09.308 )") 00:25:09.308 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.569 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.569 07:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.569 { 00:25:09.569 "params": { 00:25:09.569 "name": "Nvme$subsystem", 00:25:09.569 "trtype": "$TEST_TRANSPORT", 00:25:09.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.569 "adrfam": "ipv4", 00:25:09.569 "trsvcid": "$NVMF_PORT", 00:25:09.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.569 "hdgst": ${hdgst:-false}, 00:25:09.569 "ddgst": ${ddgst:-false} 00:25:09.569 }, 00:25:09.569 "method": "bdev_nvme_attach_controller" 00:25:09.569 } 00:25:09.569 EOF 00:25:09.569 )") 00:25:09.569 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:09.569 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:25:09.569 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:25:09.569 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:09.569 "params": { 00:25:09.569 "name": "Nvme1", 00:25:09.569 "trtype": "tcp", 00:25:09.569 "traddr": "10.0.0.2", 00:25:09.569 "adrfam": "ipv4", 00:25:09.569 "trsvcid": "4420", 00:25:09.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:09.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:09.569 "hdgst": false, 00:25:09.569 "ddgst": false 00:25:09.569 }, 00:25:09.569 "method": "bdev_nvme_attach_controller" 00:25:09.569 },{ 00:25:09.569 "params": { 00:25:09.569 "name": "Nvme2", 00:25:09.569 "trtype": "tcp", 00:25:09.569 "traddr": "10.0.0.2", 00:25:09.569 "adrfam": "ipv4", 00:25:09.569 "trsvcid": "4420", 00:25:09.569 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:09.569 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:09.569 "hdgst": false, 00:25:09.569 "ddgst": false 00:25:09.570 }, 00:25:09.570 "method": "bdev_nvme_attach_controller" 00:25:09.570 },{ 
00:25:09.570 "params": { 00:25:09.570 "name": "Nvme3", 00:25:09.570 "trtype": "tcp", 00:25:09.570 "traddr": "10.0.0.2", 00:25:09.570 "adrfam": "ipv4", 00:25:09.570 "trsvcid": "4420", 00:25:09.570 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:09.570 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:09.570 "hdgst": false, 00:25:09.570 "ddgst": false 00:25:09.570 }, 00:25:09.570 "method": "bdev_nvme_attach_controller" 00:25:09.570 },{ 00:25:09.570 "params": { 00:25:09.570 "name": "Nvme4", 00:25:09.570 "trtype": "tcp", 00:25:09.570 "traddr": "10.0.0.2", 00:25:09.570 "adrfam": "ipv4", 00:25:09.570 "trsvcid": "4420", 00:25:09.570 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:09.570 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:09.570 "hdgst": false, 00:25:09.570 "ddgst": false 00:25:09.570 }, 00:25:09.570 "method": "bdev_nvme_attach_controller" 00:25:09.570 },{ 00:25:09.570 "params": { 00:25:09.570 "name": "Nvme5", 00:25:09.570 "trtype": "tcp", 00:25:09.570 "traddr": "10.0.0.2", 00:25:09.570 "adrfam": "ipv4", 00:25:09.570 "trsvcid": "4420", 00:25:09.570 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:09.570 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:09.570 "hdgst": false, 00:25:09.570 "ddgst": false 00:25:09.570 }, 00:25:09.570 "method": "bdev_nvme_attach_controller" 00:25:09.570 },{ 00:25:09.570 "params": { 00:25:09.570 "name": "Nvme6", 00:25:09.570 "trtype": "tcp", 00:25:09.570 "traddr": "10.0.0.2", 00:25:09.570 "adrfam": "ipv4", 00:25:09.570 "trsvcid": "4420", 00:25:09.570 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:09.570 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:09.570 "hdgst": false, 00:25:09.570 "ddgst": false 00:25:09.570 }, 00:25:09.570 "method": "bdev_nvme_attach_controller" 00:25:09.570 },{ 00:25:09.570 "params": { 00:25:09.570 "name": "Nvme7", 00:25:09.570 "trtype": "tcp", 00:25:09.570 "traddr": "10.0.0.2", 00:25:09.570 "adrfam": "ipv4", 00:25:09.570 "trsvcid": "4420", 00:25:09.570 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:09.570 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:25:09.570 "hdgst": false, 00:25:09.570 "ddgst": false 00:25:09.570 }, 00:25:09.570 "method": "bdev_nvme_attach_controller" 00:25:09.570 },{ 00:25:09.570 "params": { 00:25:09.570 "name": "Nvme8", 00:25:09.570 "trtype": "tcp", 00:25:09.570 "traddr": "10.0.0.2", 00:25:09.570 "adrfam": "ipv4", 00:25:09.570 "trsvcid": "4420", 00:25:09.570 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:09.570 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:09.570 "hdgst": false, 00:25:09.570 "ddgst": false 00:25:09.570 }, 00:25:09.570 "method": "bdev_nvme_attach_controller" 00:25:09.570 },{ 00:25:09.570 "params": { 00:25:09.570 "name": "Nvme9", 00:25:09.570 "trtype": "tcp", 00:25:09.570 "traddr": "10.0.0.2", 00:25:09.570 "adrfam": "ipv4", 00:25:09.570 "trsvcid": "4420", 00:25:09.570 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:09.570 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:09.570 "hdgst": false, 00:25:09.570 "ddgst": false 00:25:09.570 }, 00:25:09.570 "method": "bdev_nvme_attach_controller" 00:25:09.570 },{ 00:25:09.570 "params": { 00:25:09.570 "name": "Nvme10", 00:25:09.570 "trtype": "tcp", 00:25:09.570 "traddr": "10.0.0.2", 00:25:09.570 "adrfam": "ipv4", 00:25:09.570 "trsvcid": "4420", 00:25:09.570 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:09.570 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:09.570 "hdgst": false, 00:25:09.570 "ddgst": false 00:25:09.570 }, 00:25:09.570 "method": "bdev_nvme_attach_controller" 00:25:09.570 }' 00:25:09.570 [2024-11-26 07:34:53.495521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.570 [2024-11-26 07:34:53.531871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.955 Running I/O for 10 seconds... 
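The xtrace above shows the pattern nvmf/common.sh uses to build the bdevperf JSON config: one heredoc JSON fragment is appended to a bash array per subsystem, the fragments are comma-joined via `IFS=,` and `"${config[*]}"`, and the result is normalized with `jq`. A minimal standalone sketch of that pattern follows; the two-iteration loop, the `joined` variable name, and the address/port/NQN values are illustrative placeholders, not the exact common.sh code:

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem config assembly seen in the trace above.
# Addresses, ports, and NQNs are illustrative placeholders.
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Comma-join the fragments, as the IFS=, / printf pair does in the trace;
# the real script then pipes the joined text through jq.
IFS=,
joined="${config[*]}"
unset IFS
printf '%s\n' "$joined"
```

Each array element is one complete JSON object, so the comma-joined string is exactly the `{...},{...}` text handed to `printf '%s\n'` in the log.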
00:25:10.955 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:10.955 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:25:10.955 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:10.955 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.955 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:11.217 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.217 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:11.217 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:11.217 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:11.217 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:11.217 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:25:11.217 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:25:11.217 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:11.217 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:11.217 07:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:11.217 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:11.217 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.217 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:11.217 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.217 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:25:11.217 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:25:11.217 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:11.479 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:11.479 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:11.479 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:11.479 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:11.479 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.479 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:11.479 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:25:11.479 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:25:11.479 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:25:11.479 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:11.740 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:11.740 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:11.740 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:11.740 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:11.740 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.740 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:11.740 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.740 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:25:11.740 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:25:11.740 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:25:11.740 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:25:11.740 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:25:11.740 07:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2185172 00:25:11.740 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2185172 ']' 00:25:11.740 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2185172 00:25:11.740 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:25:11.740 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.017 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2185172 00:25:12.017 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:12.017 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:12.017 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2185172' 00:25:12.017 killing process with pid 2185172 00:25:12.017 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2185172 00:25:12.017 07:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2185172 00:25:12.017 [2024-11-26 07:34:55.934627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743bc0 is same with the state(6) to be set 00:25:12.017 [2024-11-26 07:34:55.934676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743bc0 is same with the state(6) to be set 00:25:12.017 [2024-11-26 07:34:55.934685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1743bc0 is same with the state(6) to be set 00:25:12.017 [2024-11-26 07:34:55.936124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724860 is same with the state(6) to be set 00:25:12.018 [2024-11-26 07:34:55.937309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937414] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937470] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937526] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937581] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.937605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744090 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938823] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938889] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.019 [2024-11-26 07:34:55.938894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938949] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.938995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939005] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939063] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744a50 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939619] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939715] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939808] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939905] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.020 [2024-11-26 07:34:55.939913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.939921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.939930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.939938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.939946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.939953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.939961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.939969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.939976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.939984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.939992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940000] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744f20 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940791] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940860] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940922] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940980] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.940999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.941003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.941007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.941012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.941017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.941021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.941026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.021 [2024-11-26 07:34:55.941031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.941036] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.941040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.941045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.941049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.941054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.941058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.941063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.941067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.941072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.941077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.941081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.941086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.941090] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.941096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.941100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17453f0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942140] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942195] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942252] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942307] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942363] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d47a0 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.022 [2024-11-26 07:34:55.942874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.942882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.942889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.942897] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.942905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.942913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.942920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.942928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.942936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.942943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.942951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.942958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.942966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.942977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.942985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.942993] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943087] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943181] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943274] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.943335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4c70 is same with the state(6) to be set 00:25:12.023 [2024-11-26 07:34:55.950651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.023 [2024-11-26 07:34:55.950689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.023 [2024-11-26 07:34:55.950700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.023 [2024-11-26 07:34:55.950708] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated log output condensed: between 2024-11-26 07:34:55.950717 and 07:34:55.951596, admin ASYNC EVENT REQUEST (0c) commands (qid:0, cid:0-3, nsid:0, cdw10:00000000 cdw11:00000000) were repeatedly printed by nvme_qpair.c:223 nvme_admin_qpair_print_command and aborted with "ABORTED - SQ DELETION (00/08)" by nvme_qpair.c:474 spdk_nvme_print_completion, interleaved with nvme_tcp.c:326 nvme_tcp_qpair_set_recv_state *ERROR* messages "The recv state of tqpair=<addr> is same with the state(6) to be set" for tqpairs 0xf6a530, 0xf6b5c0, 0xf36880, 0xa20610, 0xf6ba90, 0xb02850, 0xf2e8b0, 0xb09d40, 0xb04080, 0xaf4b00 ...]
[... from 07:34:55.952134 onward, I/O WRITE commands (sqid:1, cid:0-63, nsid:1, lba:24576-32640 in steps of 128, len:128, SGL TRANSPORT DATA BLOCK) and the subsequent READ commands (sqid:1, cid:0-2, lba:24576-24832) printed by nvme_qpair.c:243 nvme_io_qpair_print_command were likewise each completed with "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0" ...]
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.026 [2024-11-26 07:34:55.953726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.026 [2024-11-26 07:34:55.953736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.026 [2024-11-26 07:34:55.953744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.953758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.953766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.953775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.953783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.953793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.953800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.953810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.953817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.953827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.953834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.953844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.953851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.953868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.953876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.953885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.953893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.953903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.953911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.953920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 
07:34:55.953928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.953937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.953945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.953955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.953963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.953972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.953982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.953991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.953999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954026] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 
[2024-11-26 07:34:55.954225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.027 [2024-11-26 07:34:55.954358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.027 [2024-11-26 07:34:55.954365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954616] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954711] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.954720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.954727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.956233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:25:12.028 [2024-11-26 07:34:55.956265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa20610 (9): Bad file descriptor 00:25:12.028 [2024-11-26 07:34:55.957854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:25:12.028 [2024-11-26 07:34:55.957890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb02850 (9): Bad file descriptor 00:25:12.028 [2024-11-26 07:34:55.958409] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:12.028 [2024-11-26 07:34:55.958459] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:12.028 [2024-11-26 07:34:55.958495] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:12.028 [2024-11-26 07:34:55.958846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.028 [2024-11-26 07:34:55.958870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa20610 with addr=10.0.0.2, port=4420 00:25:12.028 [2024-11-26 07:34:55.958879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa20610 is same with the state(6) to be set 00:25:12.028 [2024-11-26 07:34:55.958932] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:12.028 [2024-11-26 
07:34:55.958965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.958976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.958989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.958997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.959007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.959018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.959029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.959036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.959046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.959053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.959063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.959070] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.959080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.959087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.959097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.959105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.959114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.959122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.959131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.959138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.959148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.959155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.959165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.959172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.959182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.028 [2024-11-26 07:34:55.959189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.028 [2024-11-26 07:34:55.959199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:12.029 [2024-11-26 07:34:55.959267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959360] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 
07:34:55.959648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959743] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.029 [2024-11-26 07:34:55.959795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.029 [2024-11-26 07:34:55.959802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.959812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.959819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.959828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.959836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.959845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.959853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.959866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.959878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.959888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.959896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.959905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.959912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.959922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.959929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.959939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 
[2024-11-26 07:34:55.959946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.959956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.959963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.959973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.959981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.959991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.959999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.960009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.960016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.960026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.960034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.960044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.960052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.960062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.960070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.960078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10090 is same with the state(6) to be set 00:25:12.030 [2024-11-26 07:34:55.960418] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:12.030 [2024-11-26 07:34:55.960464] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:12.030 [2024-11-26 07:34:55.960502] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:12.030 [2024-11-26 07:34:55.960766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-11-26 07:34:55.960782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb02850 with addr=10.0.0.2, port=4420 00:25:12.030 [2024-11-26 07:34:55.960791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02850 is same with the state(6) to be set 00:25:12.030 [2024-11-26 07:34:55.960803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa20610 (9): Bad file descriptor 00:25:12.030 [2024-11-26 07:34:55.962060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962076] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.962090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.962112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.962133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.962154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.962175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.962197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.962214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.962232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.962250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.962268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.962290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:12.030 [2024-11-26 07:34:55.962308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.962325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.962343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.962360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.962377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.962395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962402] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.962412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.962429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.030 [2024-11-26 07:34:55.962447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.030 [2024-11-26 07:34:55.962454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 
07:34:55.962695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962791] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.962976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.962985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 
[2024-11-26 07:34:55.962993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.963002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.963009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.963019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.963027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.031 [2024-11-26 07:34:55.963036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.031 [2024-11-26 07:34:55.963043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.032 [2024-11-26 07:34:55.963052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.032 [2024-11-26 07:34:55.963060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.032 [2024-11-26 07:34:55.963069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.032 [2024-11-26 07:34:55.963077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.032 [2024-11-26 07:34:55.963086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.032 [2024-11-26 07:34:55.963094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.032 [2024-11-26 07:34:55.963104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.032 [2024-11-26 07:34:55.963111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.032 [2024-11-26 07:34:55.963121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.032 [2024-11-26 07:34:55.963128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.032 [2024-11-26 07:34:55.963138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.032 [2024-11-26 07:34:55.963147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.032 [2024-11-26 07:34:55.963157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.032 [2024-11-26 07:34:55.963165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.032 [2024-11-26 07:34:55.963174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.032 [2024-11-26 07:34:55.963182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.032 [2024-11-26 07:34:55.963191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.032 [2024-11-26 07:34:55.963199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.032 [2024-11-26 07:34:55.963207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf01e90 is same with the state(6) to be set 00:25:12.032 [2024-11-26 07:34:55.963308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:25:12.032 [2024-11-26 07:34:55.963329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04080 (9): Bad file descriptor 00:25:12.032 [2024-11-26 07:34:55.963340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb02850 (9): Bad file descriptor 00:25:12.032 [2024-11-26 07:34:55.963350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:25:12.032 [2024-11-26 07:34:55.963357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:25:12.032 [2024-11-26 07:34:55.963366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:25:12.032 [2024-11-26 07:34:55.963374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:25:12.032 [2024-11-26 07:34:55.963386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6a530 (9): Bad file descriptor 00:25:12.032 [2024-11-26 07:34:55.963406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6b5c0 (9): Bad file descriptor 00:25:12.032 [2024-11-26 07:34:55.963425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf36880 (9): Bad file descriptor 00:25:12.032 [2024-11-26 07:34:55.963442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6ba90 (9): Bad file descriptor 00:25:12.032 [2024-11-26 07:34:55.963462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2e8b0 (9): Bad file descriptor 00:25:12.032 [2024-11-26 07:34:55.963479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb09d40 (9): Bad file descriptor 00:25:12.032 [2024-11-26 07:34:55.963494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf4b00 (9): Bad file descriptor 00:25:12.032 [2024-11-26 07:34:55.964769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:25:12.032 [2024-11-26 07:34:55.964800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:25:12.032 [2024-11-26 07:34:55.964809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:25:12.032 [2024-11-26 07:34:55.964817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:25:12.032 [2024-11-26 07:34:55.964825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:25:12.032 [2024-11-26 07:34:55.965370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-11-26 07:34:55.965390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04080 with addr=10.0.0.2, port=4420 00:25:12.032 [2024-11-26 07:34:55.965398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04080 is same with the state(6) to be set 00:25:12.032 [2024-11-26 07:34:55.965731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-11-26 07:34:55.965742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb09d40 with addr=10.0.0.2, port=4420 00:25:12.032 [2024-11-26 07:34:55.965749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb09d40 is same with the state(6) to be set 00:25:12.032 [2024-11-26 07:34:55.966060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04080 (9): Bad file descriptor 00:25:12.032 [2024-11-26 07:34:55.966073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb09d40 (9): Bad file descriptor 00:25:12.032 [2024-11-26 07:34:55.966123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:25:12.032 [2024-11-26 07:34:55.966131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:25:12.032 [2024-11-26 07:34:55.966139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:25:12.032 [2024-11-26 07:34:55.966146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:25:12.032 [2024-11-26 07:34:55.966153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:25:12.032 [2024-11-26 07:34:55.966160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:25:12.032 [2024-11-26 07:34:55.966167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:25:12.032 [2024-11-26 07:34:55.966173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:25:12.032 [2024-11-26 07:34:55.968007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:25:12.032 [2024-11-26 07:34:55.968243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-11-26 07:34:55.968256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa20610 with addr=10.0.0.2, port=4420 00:25:12.032 [2024-11-26 07:34:55.968264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa20610 is same with the state(6) to be set 00:25:12.032 [2024-11-26 07:34:55.968300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa20610 (9): Bad file descriptor 00:25:12.032 [2024-11-26 07:34:55.968336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:25:12.032 [2024-11-26 07:34:55.968344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:25:12.032 [2024-11-26 07:34:55.968351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:25:12.032 [2024-11-26 07:34:55.968358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:25:12.032 [2024-11-26 07:34:55.969005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:25:12.032 [2024-11-26 07:34:55.969355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-11-26 07:34:55.969369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb02850 with addr=10.0.0.2, port=4420 00:25:12.032 [2024-11-26 07:34:55.969376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02850 is same with the state(6) to be set 00:25:12.032 [2024-11-26 07:34:55.969412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb02850 (9): Bad file descriptor 00:25:12.032 [2024-11-26 07:34:55.969452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:25:12.032 [2024-11-26 07:34:55.969460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:25:12.032 [2024-11-26 07:34:55.969467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:25:12.032 [2024-11-26 07:34:55.969474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:25:12.032 [2024-11-26 07:34:55.973462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.032 [2024-11-26 07:34:55.973482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.032 [2024-11-26 07:34:55.973495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.032 [2024-11-26 07:34:55.973502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.032 [2024-11-26 07:34:55.973513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.032 [2024-11-26 07:34:55.973520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.032 [2024-11-26 07:34:55.973530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.032 [2024-11-26 07:34:55.973538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.032 [2024-11-26 07:34:55.973547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.032 [2024-11-26 07:34:55.973555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.032 [2024-11-26 07:34:55.973564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.032 [2024-11-26 07:34:55.973572] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:12.033 [2024-11-26 07:34:55.973772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973868] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.973987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.973997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.974004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.974013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.974021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.974030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.974037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.974047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.974054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.974063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.974071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.974080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.974087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.974098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.974105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.974115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.974122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.974132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.974139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.974148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 
07:34:55.974156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.974165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.974173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.974182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.974189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.974198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.974206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.974215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.974223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.974232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.033 [2024-11-26 07:34:55.974239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.033 [2024-11-26 07:34:55.974249] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.974256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.974265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.974272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.974282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.974289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.974298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.974307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.974317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.974325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.974334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.974341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.974351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.974358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.974367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.974375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.974384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.974392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.974401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.974409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.974419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.974427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.974436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 
[2024-11-26 07:34:55.974443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.974453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.974461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.974470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.974478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.974487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.974495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.974504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.974512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.974523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.974530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.974540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.974547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.974557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.974565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.974573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0eef0 is same with the state(6) to be set 00:25:12.034 [2024-11-26 07:34:55.975847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.975864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.975875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.975883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.975894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.975901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.975912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.975919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.975929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.975937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.975947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.975954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.975965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.975972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.975982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.975990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.976000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.976007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:12.034 [2024-11-26 07:34:55.976019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.976027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.976037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.976044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.976054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.976061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.976071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.976079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.976088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.976096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.976105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.976113] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.976122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.976130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.976139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.976147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.976156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.976164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.976173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.976181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.976190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.976198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.034 [2024-11-26 07:34:55.976207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.034 [2024-11-26 07:34:55.976214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 
07:34:55.976406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976501] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 
[2024-11-26 07:34:55.976698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.035 [2024-11-26 07:34:55.976801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.035 [2024-11-26 07:34:55.976811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.976818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.976828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.976835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.976845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.976852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.976867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.976874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.976885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.976893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.976902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.976910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.976919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.976927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.976936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.976944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.976953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.976961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.976969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0f2a0 is same with the state(6) to be set 00:25:12.036 [2024-11-26 07:34:55.978233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.978247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:12.036 [2024-11-26 07:34:55.978260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.978269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.978281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.978290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.978300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.978308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.978317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.978325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.978335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.978342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.978352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.978359] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.978372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.978380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.978389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.978397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.978406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.978413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.978423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.978430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.978440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.978447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.978457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.978464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.978474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.978481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.978491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.978498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.978508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.978515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.978525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.978532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.036 [2024-11-26 07:34:55.978542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.036 [2024-11-26 07:34:55.978549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:12.036 [2024-11-26 07:34:55.978559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.036 [2024-11-26 07:34:55.978566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeat for cid:19-63, lba:18816-24448 in steps of 128 ...]
00:25:12.037 [2024-11-26 07:34:55.979343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf10820 is same with the state(6) to be set
00:25:12.037 [2024-11-26 07:34:55.980614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.037 [2024-11-26 07:34:55.980627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeat for cid:1-63, lba:16512-24448 in steps of 128 ...]
00:25:12.039 [2024-11-26 07:34:55.981723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56490 is same with the state(6) to be set
00:25:12.039 [2024-11-26 07:34:55.983003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.039 [2024-11-26 07:34:55.983016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeat for cid:1-7, lba:16512-17280 in steps of 128; last entry truncated ...]
00:25:12.039 [2024-11-26 07:34:55.983139] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.039 [2024-11-26 07:34:55.983148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.039 [2024-11-26 07:34:55.983156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.039 [2024-11-26 07:34:55.983165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.039 [2024-11-26 07:34:55.983173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.039 [2024-11-26 07:34:55.983182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.039 [2024-11-26 07:34:55.983190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.039 [2024-11-26 07:34:55.983200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.039 [2024-11-26 07:34:55.983207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.039 [2024-11-26 07:34:55.983216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.039 [2024-11-26 07:34:55.983224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.039 [2024-11-26 07:34:55.983233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.039 [2024-11-26 07:34:55.983241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.039 [2024-11-26 07:34:55.983250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.039 [2024-11-26 07:34:55.983257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.039 [2024-11-26 07:34:55.983267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.039 [2024-11-26 07:34:55.983274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.039 [2024-11-26 07:34:55.983288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.039 [2024-11-26 07:34:55.983296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.039 [2024-11-26 07:34:55.983306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.039 [2024-11-26 07:34:55.983313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.039 [2024-11-26 07:34:55.983323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.039 [2024-11-26 07:34:55.983330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:12.039 [2024-11-26 07:34:55.983340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.039 [2024-11-26 07:34:55.983347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983433] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 
07:34:55.983728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983824] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.040 [2024-11-26 07:34:55.983949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.040 [2024-11-26 07:34:55.983956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.983966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.983973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.983983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.983990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.984000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.984007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.984017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 
[2024-11-26 07:34:55.984024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.984034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.984041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.984051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.984058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.984067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.984075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.984085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.984092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.984102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.984109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.984117] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89030 is same with the state(6) to be set 00:25:12.041 [2024-11-26 07:34:55.985385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:12.041 [2024-11-26 07:34:55.985597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:12.041 [2024-11-26 07:34:55.985950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.041 [2024-11-26 07:34:55.985957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.041 [2024-11-26 07:34:55.985967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.985975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.985984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.985992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986043] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 
07:34:55.986337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986434] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.042 [2024-11-26 07:34:55.986559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.042 [2024-11-26 07:34:55.986568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8a500 is same with the state(6) to be set 00:25:12.042 [2024-11-26 07:34:55.988301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:12.042 [2024-11-26 07:34:55.988326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:25:12.042 [2024-11-26 07:34:55.988336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:25:12.042 [2024-11-26 07:34:55.988346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:25:12.042 [2024-11-26 07:34:55.988441] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:25:12.042 [2024-11-26 07:34:55.988456] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
00:25:12.042 [2024-11-26 07:34:55.988528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:25:12.042 task offset: 24576 on job bdev=Nvme7n1 fails
00:25:12.042
00:25:12.042 Latency(us)
00:25:12.042 [2024-11-26T06:34:56.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:12.043 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.043 Job: Nvme1n1 ended in about 0.96 seconds with error
00:25:12.043 Verification LBA range: start 0x0 length 0x400
00:25:12.043 Nvme1n1 : 0.96 199.08 12.44 66.36 0.00 238384.64 19879.25 248162.99
00:25:12.043 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.043 Job: Nvme2n1 ended in about 0.95 seconds with error
00:25:12.043 Verification LBA range: start 0x0 length 0x400
00:25:12.043 Nvme2n1 : 0.95 201.95 12.62 67.32 0.00 230128.64 21736.11 270882.13
00:25:12.043 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.043 Job: Nvme3n1 ended in about 0.95 seconds with error
00:25:12.043 Verification LBA range: start 0x0 length 0x400
00:25:12.043 Nvme3n1 : 0.95 202.90 12.68 67.63 0.00 224153.23 4696.75 251658.24
00:25:12.043 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.043 Job: Nvme4n1 ended in about 0.95 seconds with error
00:25:12.043 Verification LBA range: start 0x0 length 0x400
00:25:12.043 Nvme4n1 : 0.95 201.38 12.59 67.13 0.00 221090.35 8956.59 225443.84
00:25:12.043 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.043 Job: Nvme5n1 ended in about 0.97 seconds with error
00:25:12.043 Verification LBA range: start 0x0 length 0x400
00:25:12.043 Nvme5n1 : 0.97 132.39 8.27 66.20 0.00 292970.67 16384.00 256901.12
00:25:12.043 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.043 Job: Nvme6n1 ended in about 0.97 seconds with error
00:25:12.043 Verification LBA range: start 0x0 length 0x400
00:25:12.043 Nvme6n1 : 0.97 132.07 8.25 66.04 0.00 287361.14 19551.57 270882.13
00:25:12.043 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.043 Job: Nvme7n1 ended in about 0.94 seconds with error
00:25:12.043 Verification LBA range: start 0x0 length 0x400
00:25:12.043 Nvme7n1 : 0.94 203.22 12.70 67.74 0.00 204454.13 4724.05 246415.36
00:25:12.043 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.043 Job: Nvme8n1 ended in about 0.97 seconds with error
00:25:12.043 Verification LBA range: start 0x0 length 0x400
00:25:12.043 Nvme8n1 : 0.97 131.75 8.23 65.87 0.00 275204.12 12997.97 251658.24
00:25:12.043 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.043 Job: Nvme9n1 ended in about 0.97 seconds with error
00:25:12.043 Verification LBA range: start 0x0 length 0x400
00:25:12.043 Nvme9n1 : 0.97 131.42 8.21 65.71 0.00 269522.49 16930.13 274377.39
00:25:12.043 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.043 Job: Nvme10n1 ended in about 0.98 seconds with error
00:25:12.043 Verification LBA range: start 0x0 length 0x400
00:25:12.043 Nvme10n1 : 0.98 196.64 12.29 65.55 0.00 197941.55 18350.08 241172.48
00:25:12.043 [2024-11-26T06:34:56.180Z] ===================================================================================================================
00:25:12.043 [2024-11-26T06:34:56.180Z] Total : 1732.82 108.30 665.54 0.00 239994.04 4696.75 274377.39
00:25:12.043 [2024-11-26 07:34:56.015797] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:12.043 [2024-11-26 07:34:56.015826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:25:12.043 [2024-11-26 07:34:56.016275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.043 [2024-11-26 07:34:56.016293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock:
*ERROR*: sock connection error of tqpair=0xaf4b00 with addr=10.0.0.2, port=4420 00:25:12.043 [2024-11-26 07:34:56.016302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf4b00 is same with the state(6) to be set 00:25:12.043 [2024-11-26 07:34:56.016476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.043 [2024-11-26 07:34:56.016486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2e8b0 with addr=10.0.0.2, port=4420 00:25:12.043 [2024-11-26 07:34:56.016493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e8b0 is same with the state(6) to be set 00:25:12.043 [2024-11-26 07:34:56.016859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.043 [2024-11-26 07:34:56.016874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf36880 with addr=10.0.0.2, port=4420 00:25:12.043 [2024-11-26 07:34:56.016882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf36880 is same with the state(6) to be set 00:25:12.043 [2024-11-26 07:34:56.017214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.043 [2024-11-26 07:34:56.017224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6ba90 with addr=10.0.0.2, port=4420 00:25:12.043 [2024-11-26 07:34:56.017232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6ba90 is same with the state(6) to be set 00:25:12.043 [2024-11-26 07:34:56.018835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:25:12.043 [2024-11-26 07:34:56.018849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:25:12.043 [2024-11-26 07:34:56.018860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:25:12.043 [2024-11-26 07:34:56.018889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:25:12.043 [2024-11-26 07:34:56.019321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.043 [2024-11-26 07:34:56.019335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6b5c0 with addr=10.0.0.2, port=4420 00:25:12.043 [2024-11-26 07:34:56.019343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b5c0 is same with the state(6) to be set 00:25:12.043 [2024-11-26 07:34:56.019701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.043 [2024-11-26 07:34:56.019712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6a530 with addr=10.0.0.2, port=4420 00:25:12.043 [2024-11-26 07:34:56.019720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6a530 is same with the state(6) to be set 00:25:12.043 [2024-11-26 07:34:56.019732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf4b00 (9): Bad file descriptor 00:25:12.043 [2024-11-26 07:34:56.019744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2e8b0 (9): Bad file descriptor 00:25:12.043 [2024-11-26 07:34:56.019753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf36880 (9): Bad file descriptor 00:25:12.043 [2024-11-26 07:34:56.019763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6ba90 (9): Bad file descriptor 00:25:12.043 [2024-11-26 07:34:56.019798] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:25:12.043 [2024-11-26 07:34:56.019811] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:25:12.043 [2024-11-26 07:34:56.019822] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:25:12.043 [2024-11-26 07:34:56.019835] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:25:12.043 [2024-11-26 07:34:56.020246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.043 [2024-11-26 07:34:56.020260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb09d40 with addr=10.0.0.2, port=4420 00:25:12.043 [2024-11-26 07:34:56.020268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb09d40 is same with the state(6) to be set 00:25:12.043 [2024-11-26 07:34:56.020595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.043 [2024-11-26 07:34:56.020605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04080 with addr=10.0.0.2, port=4420 00:25:12.043 [2024-11-26 07:34:56.020613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04080 is same with the state(6) to be set 00:25:12.043 [2024-11-26 07:34:56.020811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.043 [2024-11-26 07:34:56.020821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa20610 with addr=10.0.0.2, port=4420 00:25:12.043 [2024-11-26 07:34:56.020829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa20610 is same with the state(6) to be set 00:25:12.043 [2024-11-26 07:34:56.021018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno 
= 111 00:25:12.043 [2024-11-26 07:34:56.021030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb02850 with addr=10.0.0.2, port=4420 00:25:12.043 [2024-11-26 07:34:56.021038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02850 is same with the state(6) to be set 00:25:12.043 [2024-11-26 07:34:56.021047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6b5c0 (9): Bad file descriptor 00:25:12.043 [2024-11-26 07:34:56.021057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6a530 (9): Bad file descriptor 00:25:12.043 [2024-11-26 07:34:56.021066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:25:12.043 [2024-11-26 07:34:56.021073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:25:12.043 [2024-11-26 07:34:56.021082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:12.043 [2024-11-26 07:34:56.021090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:25:12.043 [2024-11-26 07:34:56.021098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:25:12.043 [2024-11-26 07:34:56.021105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:25:12.043 [2024-11-26 07:34:56.021112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:25:12.043 [2024-11-26 07:34:56.021119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:25:12.043 [2024-11-26 07:34:56.021126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:25:12.043 [2024-11-26 07:34:56.021132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:25:12.043 [2024-11-26 07:34:56.021143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:25:12.043 [2024-11-26 07:34:56.021149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:25:12.043 [2024-11-26 07:34:56.021157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:25:12.043 [2024-11-26 07:34:56.021164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:25:12.043 [2024-11-26 07:34:56.021171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:25:12.043 [2024-11-26 07:34:56.021178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:25:12.043 [2024-11-26 07:34:56.021258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb09d40 (9): Bad file descriptor 00:25:12.044 [2024-11-26 07:34:56.021269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04080 (9): Bad file descriptor 00:25:12.044 [2024-11-26 07:34:56.021279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa20610 (9): Bad file descriptor 00:25:12.044 [2024-11-26 07:34:56.021288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb02850 (9): Bad file descriptor 00:25:12.044 [2024-11-26 07:34:56.021297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:25:12.044 [2024-11-26 07:34:56.021303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:25:12.044 [2024-11-26 07:34:56.021311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:25:12.044 [2024-11-26 07:34:56.021317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:25:12.044 [2024-11-26 07:34:56.021325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:25:12.044 [2024-11-26 07:34:56.021331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:25:12.044 [2024-11-26 07:34:56.021338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:25:12.044 [2024-11-26 07:34:56.021345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:25:12.044 [2024-11-26 07:34:56.021370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:25:12.044 [2024-11-26 07:34:56.021378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:25:12.044 [2024-11-26 07:34:56.021385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:25:12.044 [2024-11-26 07:34:56.021391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:25:12.044 [2024-11-26 07:34:56.021398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:25:12.044 [2024-11-26 07:34:56.021405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:25:12.044 [2024-11-26 07:34:56.021412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:25:12.044 [2024-11-26 07:34:56.021418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:25:12.044 [2024-11-26 07:34:56.021426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:25:12.044 [2024-11-26 07:34:56.021432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:25:12.044 [2024-11-26 07:34:56.021439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:25:12.044 [2024-11-26 07:34:56.021448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:25:12.044 [2024-11-26 07:34:56.021455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:25:12.044 [2024-11-26 07:34:56.021462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:25:12.044 [2024-11-26 07:34:56.021469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:25:12.044 [2024-11-26 07:34:56.021475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:25:12.305 07:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2185448
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2185448
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2185448
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:13.248 rmmod nvme_tcp
00:25:13.248 rmmod nvme_fabrics
00:25:13.248 rmmod nvme_keyring
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2185172 ']'
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2185172
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2185172 ']'
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2185172
00:25:13.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2185172) - No such process
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2185172 is not found'
00:25:13.248 Process with pid 2185172 is not found
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:25:13.248
07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.248 07:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:15.794 00:25:15.794 real 0m7.749s 00:25:15.794 user 0m18.989s 00:25:15.794 sys 0m1.224s 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:15.794 ************************************ 00:25:15.794 END TEST nvmf_shutdown_tc3 00:25:15.794 ************************************ 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:25:15.794 07:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:15.794 ************************************ 00:25:15.794 START TEST nvmf_shutdown_tc4 00:25:15.794 ************************************ 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.794 07:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 
-- # local -ga e810 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:15.794 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:15.795 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.795 
07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:15.795 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:15.795 Found net devices under 0000:31:00.0: cvl_0_0 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:15.795 Found net devices under 0000:31:00.1: cvl_0_1 
00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:15.795 07:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT' 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:15.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:15.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:25:15.795 00:25:15.795 --- 10.0.0.2 ping statistics --- 00:25:15.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.795 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:15.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:15.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:25:15.795 00:25:15.795 --- 10.0.0.1 ping statistics --- 00:25:15.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.795 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:15.795 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2186702 00:25:15.796 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2186702 00:25:15.796 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2186702 ']' 00:25:15.796 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.796 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:15.796 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:15.796 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:15.796 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:15.796 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:15.796 [2024-11-26 07:34:59.849573] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:25:15.796 [2024-11-26 07:34:59.849627] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.057 [2024-11-26 07:34:59.948915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:16.057 [2024-11-26 07:34:59.980563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.057 [2024-11-26 07:34:59.980593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.057 [2024-11-26 07:34:59.980600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.057 [2024-11-26 07:34:59.980605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.057 [2024-11-26 07:34:59.980609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:16.057 [2024-11-26 07:34:59.981897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:16.057 [2024-11-26 07:34:59.982045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:16.057 [2024-11-26 07:34:59.982179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.057 [2024-11-26 07:34:59.982180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:16.628 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.628 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:25:16.628 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:16.628 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:16.628 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:16.628 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.628 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:16.628 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.628 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:16.628 [2024-11-26 07:35:00.689541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.628 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.628 07:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.629 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:16.890 Malloc1 00:25:16.890 [2024-11-26 07:35:00.798469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.890 Malloc2 00:25:16.890 Malloc3 00:25:16.890 Malloc4 00:25:16.890 Malloc5 00:25:16.890 Malloc6 00:25:16.890 Malloc7 00:25:17.175 Malloc8 00:25:17.175 Malloc9 
00:25:17.175 Malloc10 00:25:17.175 07:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.175 07:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:17.175 07:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:17.175 07:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:17.175 07:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2187080 00:25:17.175 07:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:25:17.175 07:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:25:17.175 [2024-11-26 07:35:01.267661] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:25:22.462 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:22.462 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2186702 00:25:22.462 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2186702 ']' 00:25:22.462 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2186702 00:25:22.462 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:25:22.462 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:22.462 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2186702 00:25:22.462 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:22.462 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:22.462 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2186702' 00:25:22.462 killing process with pid 2186702 00:25:22.462 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2186702 00:25:22.462 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2186702 00:25:22.462 Write completed with error (sct=0, sc=8) 00:25:22.462 Write completed with error (sct=0, sc=8) 00:25:22.462 Write completed with error (sct=0, sc=8) 00:25:22.462 starting I/O failed: -6 
00:25:22.462 Write completed with error (sct=0, sc=8) 00:25:22.462 Write completed with error (sct=0, sc=8) 00:25:22.462 Write completed with error (sct=0, sc=8) 00:25:22.462 Write completed with error (sct=0, sc=8) 00:25:22.462 starting I/O failed: -6 00:25:22.462 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 
starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 [2024-11-26 07:35:06.279072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 
Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 [2024-11-26 07:35:06.279946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 
starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 starting I/O failed: -6 00:25:22.463 
[2024-11-26 07:35:06.280482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e1650 is same with the state(6) to be set 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.463 [2024-11-26 07:35:06.280512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e1650 is same with the state(6) to be set 00:25:22.463 Write completed with error (sct=0, sc=8) 00:25:22.464 [2024-11-26 07:35:06.280518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e1650 is same with the state(6) to be set 00:25:22.464 [2024-11-26 07:35:06.280523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e1650 is same with the state(6) to be set 00:25:22.464 starting I/O failed: -6 00:25:22.464 [2024-11-26 07:35:06.280528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e1650 is same with the state(6) to be set 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 [2024-11-26 07:35:06.280533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e1650 is same with the state(6) to be set 00:25:22.464 [2024-11-26 07:35:06.280539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e1650 is same with the state(6) to be set 00:25:22.464 starting I/O failed: -6 00:25:22.464 [2024-11-26 07:35:06.280544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e1650 is same with the state(6) to be set 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 
00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 [2024-11-26 07:35:06.280728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e1b20 is same with the state(6) to be set 00:25:22.464 starting I/O failed: -6 00:25:22.464 [2024-11-26 07:35:06.280750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e1b20 is same with the state(6) to be set 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 [2024-11-26 07:35:06.280758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e1b20 is same with the state(6) to be set 00:25:22.464 [2024-11-26 07:35:06.280763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e1b20 is same with the state(6) to be set 00:25:22.464 [2024-11-26 07:35:06.280768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e1b20 is same with the state(6) to be set 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 [2024-11-26 07:35:06.280774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e1b20 is same with the state(6) to be set 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 Write completed with error 
(sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 [2024-11-26 07:35:06.280855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 
Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 [2024-11-26 07:35:06.281319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e24c0 is same with the state(6) to be set 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 [2024-11-26 07:35:06.281339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e24c0 is same with the state(6) to be set 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 [2024-11-26 07:35:06.281348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e24c0 is same with the state(6) to be set 00:25:22.464 starting I/O failed: -6 00:25:22.464 [2024-11-26 07:35:06.281357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e24c0 is same with the state(6) to be set 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 [2024-11-26 07:35:06.281366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e24c0 is same with the state(6) to be set 00:25:22.464 starting I/O failed: -6 00:25:22.464 [2024-11-26 07:35:06.281374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e24c0 is same with the state(6) to be set 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 [2024-11-26 07:35:06.281382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e24c0 is same with the state(6) to be set 00:25:22.464 starting I/O failed: -6 00:25:22.464 [2024-11-26 07:35:06.281391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e24c0 is same with the 
state(6) to be set 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.464 Write completed with error (sct=0, sc=8) 00:25:22.464 starting I/O failed: -6 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 starting I/O failed: -6 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 
starting I/O failed: -6 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 starting I/O failed: -6 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 starting I/O failed: -6 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 starting I/O failed: -6 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 [2024-11-26 07:35:06.281763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e2990 is same with the state(6) to be set 00:25:22.465 starting I/O failed: -6 00:25:22.465 [2024-11-26 07:35:06.281776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e2990 is same with the state(6) to be set 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 [2024-11-26 07:35:06.281781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e2990 is same with the state(6) to be set 00:25:22.465 [2024-11-26 07:35:06.281786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e2990 is same with the state(6) to be set 00:25:22.465 starting I/O failed: -6 00:25:22.465 [2024-11-26 07:35:06.281791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e2990 is same with the state(6) to be set 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 [2024-11-26 07:35:06.281796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e2990 is same with the state(6) to be set 00:25:22.465 [2024-11-26 07:35:06.281802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e2990 is same with the state(6) to be set 00:25:22.465 starting I/O failed: -6 00:25:22.465 [2024-11-26 07:35:06.281806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e2990 is same with the state(6) to be set 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 starting I/O failed: -6 00:25:22.465 Write completed with 
error (sct=0, sc=8) 00:25:22.465 starting I/O failed: -6 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 starting I/O failed: -6 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 starting I/O failed: -6 00:25:22.465 [2024-11-26 07:35:06.282058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e2e60 is same with the state(6) to be set 00:25:22.465 [2024-11-26 07:35:06.282069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e2e60 is same with the state(6) to be set 00:25:22.465 [2024-11-26 07:35:06.282074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e2e60 is same with the state(6) to be set 00:25:22.465 [2024-11-26 07:35:06.282079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e2e60 is same with the state(6) to be set 00:25:22.465 [2024-11-26 07:35:06.282084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e2e60 is same with the state(6) to be set 00:25:22.465 [2024-11-26 07:35:06.282089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e2e60 is same with the state(6) to be set 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 starting I/O failed: -6 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 starting I/O failed: -6 00:25:22.465 [2024-11-26 07:35:06.282252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.465 NVMe io qpair process completion error 00:25:22.465 [2024-11-26 07:35:06.282469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e1ff0 is same with the state(6) to be set 00:25:22.465 [2024-11-26 07:35:06.282495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e1ff0 is same with the state(6) to be set 00:25:22.465 Write 
completed with error (sct=0, sc=8) 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 starting I/O failed: -6 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 starting I/O failed: -6 00:25:22.465 Write completed with error (sct=0, 
sc=8) 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 starting I/O failed: -6 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 [2024-11-26 07:35:06.283443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:22.465 starting I/O failed: -6 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 starting I/O failed: -6 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 Write completed with error (sct=0, sc=8) 00:25:22.465 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: 
-6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 [2024-11-26 07:35:06.284269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 
starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 
Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 [2024-11-26 07:35:06.285198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O 
failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.466 Write completed with error (sct=0, sc=8) 00:25:22.466 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting 
I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 
starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 [2024-11-26 07:35:06.286647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.467 NVMe io qpair process completion error 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, 
sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 starting I/O failed: -6 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.467 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write 
completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 [2024-11-26 07:35:06.287706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting 
I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 [2024-11-26 07:35:06.288585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 
00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with 
error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.468 starting I/O failed: -6 00:25:22.468 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 [2024-11-26 07:35:06.289500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 
Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 
00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: 
-6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.469 starting I/O failed: -6 00:25:22.469 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 [2024-11-26 07:35:06.292326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.470 NVMe io qpair process completion error 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 
Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 [2024-11-26 07:35:06.293359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 
00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 
00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 [2024-11-26 07:35:06.294209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 00:25:22.470 starting I/O failed: -6 00:25:22.470 Write completed with error (sct=0, sc=8) 
00:25:22.470 Write completed with error (sct=0, sc=8)
00:25:22.470 starting I/O failed: -6
00:25:22.470-00:25:22.471 [repeated entries: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:25:22.471 [2024-11-26 07:35:06.295139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:22.471-00:25:22.472 [repeated entries: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:25:22.472 [2024-11-26 07:35:06.297119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:22.472 NVMe io qpair process completion error
00:25:22.472 [repeated entries: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:25:22.472 [2024-11-26 07:35:06.298328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:22.472 [repeated entries: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:25:22.472 [2024-11-26 07:35:06.299167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:22.472-00:25:22.473 [repeated entries: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:25:22.473 [2024-11-26 07:35:06.300100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:22.473-00:25:22.474 [repeated entries: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:25:22.474 [2024-11-26 07:35:06.301740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:22.474 NVMe io qpair process completion error
00:25:22.474 [repeated entries: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:25:22.474 [2024-11-26 07:35:06.303062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:22.474 [repeated entries: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:25:22.474 [2024-11-26 07:35:06.304029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:22.474-00:25:22.475 [repeated entries: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:25:22.475 [2024-11-26 07:35:06.304971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:22.475-00:25:22.476 [repeated entries: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:25:22.476 [2024-11-26 07:35:06.307921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:22.476 NVMe io qpair process completion error
00:25:22.476 [repeated entries: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"; log continues]
with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 [2024-11-26 07:35:06.309130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:22.476 starting I/O failed: -6 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 
starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 starting I/O failed: -6 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.476 Write completed with error (sct=0, sc=8) 00:25:22.477 [2024-11-26 07:35:06.310092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 Write completed 
with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 
starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 [2024-11-26 07:35:06.311046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error 
(sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with 
error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.477 Write completed with error (sct=0, sc=8) 00:25:22.477 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 [2024-11-26 07:35:06.312487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:22.478 NVMe io qpair process completion error 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O 
failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 [2024-11-26 07:35:06.313541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 
starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.478 starting I/O failed: -6 00:25:22.478 Write completed with error (sct=0, sc=8) 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 [2024-11-26 07:35:06.314347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with 
error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 
starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 [2024-11-26 07:35:06.315300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on 
qpair id 2 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting I/O failed: -6 00:25:22.479 Write completed with error (sct=0, sc=8) 00:25:22.479 starting 
I/O failed: -6
00:25:22.479 Write completed with error (sct=0, sc=8)
00:25:22.479 starting I/O failed: -6
00:25:22.480 [2024-11-26 07:35:06.318031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:22.480 NVMe io qpair process completion error
00:25:22.480 Write completed with error (sct=0, sc=8)
00:25:22.480 starting I/O failed: -6
00:25:22.480 [2024-11-26 07:35:06.319061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:22.480 Write completed with error (sct=0, sc=8)
00:25:22.480 starting I/O failed: -6
00:25:22.480 [2024-11-26 07:35:06.319853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:22.481 Write completed with error (sct=0, sc=8)
00:25:22.481 starting I/O failed: -6
00:25:22.481 [2024-11-26 07:35:06.321227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:22.481 Write completed with error (sct=0, sc=8)
00:25:22.481 starting I/O failed: -6
00:25:22.482 [2024-11-26 07:35:06.322882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:22.482 NVMe io qpair process completion error
00:25:22.482 Write completed with error (sct=0, sc=8)
00:25:22.482 starting I/O failed: -6
00:25:22.482 [2024-11-26 07:35:06.323977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:22.482 Write completed with error (sct=0, sc=8)
00:25:22.483 starting I/O failed: -6
00:25:22.483 [2024-11-26 07:35:06.324785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:22.483 Write completed with error (sct=0, sc=8)
00:25:22.483 starting I/O failed: -6
00:25:22.483 [2024-11-26 07:35:06.325925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:22.484 Write completed with error (sct=0, sc=8)
00:25:22.484 starting I/O failed: -6
00:25:22.484 [2024-11-26 07:35:06.329226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:22.484 NVMe io qpair process completion error
00:25:22.484 Initializing NVMe Controllers
00:25:22.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:25:22.484 Controller IO queue size 128, less than required.
00:25:22.484 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:25:22.484 Controller IO queue size 128, less than required.
00:25:22.484 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:22.484 Controller IO queue size 128, less than required.
00:25:22.484 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:25:22.484 Controller IO queue size 128, less than required.
00:25:22.484 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:25:22.484 Controller IO queue size 128, less than required.
00:25:22.484 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:25:22.484 Controller IO queue size 128, less than required.
00:25:22.484 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:25:22.484 Controller IO queue size 128, less than required.
00:25:22.484 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:25:22.484 Controller IO queue size 128, less than required.
00:25:22.484 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:25:22.484 Controller IO queue size 128, less than required.
00:25:22.484 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:25:22.485 Controller IO queue size 128, less than required.
00:25:22.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:22.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:25:22.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:25:22.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:22.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:25:22.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:25:22.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:25:22.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:25:22.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:25:22.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:25:22.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:25:22.485 Initialization complete. Launching workers. 
00:25:22.485 ======================================================== 00:25:22.485 Latency(us) 00:25:22.485 Device Information : IOPS MiB/s Average min max 00:25:22.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1885.88 81.03 67889.93 872.36 122050.78 00:25:22.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1913.74 82.23 66919.95 666.18 121546.54 00:25:22.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1859.10 79.88 68905.82 780.62 153684.10 00:25:22.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1853.53 79.64 69150.74 799.67 119718.92 00:25:22.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1891.03 81.26 67804.42 694.92 123394.10 00:25:22.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1909.45 82.05 67176.41 648.04 118708.74 00:25:22.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1883.96 80.95 68128.09 684.65 128188.35 00:25:22.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1860.17 79.93 69017.37 823.93 120922.11 00:25:22.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1901.10 81.69 67569.94 697.69 121454.16 00:25:22.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1909.45 82.05 67299.23 801.32 134431.60 00:25:22.485 ======================================================== 00:25:22.485 Total : 18867.41 810.71 67977.64 648.04 153684.10 00:25:22.485 00:25:22.485 [2024-11-26 07:35:06.334320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399540 is same with the state(6) to be set 00:25:22.485 [2024-11-26 07:35:06.334373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397390 is same with the state(6) to be set 00:25:22.485 [2024-11-26 07:35:06.334404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2399360 is same with the state(6) to be set 00:25:22.485 [2024-11-26 07:35:06.334434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23989e0 is same with the state(6) to be set 00:25:22.485 [2024-11-26 07:35:06.334462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397060 is same with the state(6) to be set 00:25:22.485 [2024-11-26 07:35:06.334493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23979f0 is same with the state(6) to be set 00:25:22.485 [2024-11-26 07:35:06.334525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23976c0 is same with the state(6) to be set 00:25:22.485 [2024-11-26 07:35:06.334556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2398050 is same with the state(6) to be set 00:25:22.485 [2024-11-26 07:35:06.334583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2398380 is same with the state(6) to be set 00:25:22.485 [2024-11-26 07:35:06.334611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23986b0 is same with the state(6) to be set 00:25:22.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:25:22.485 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2187080 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2187080 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2187080 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:23.430 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:23.430 rmmod nvme_tcp 00:25:23.430 rmmod nvme_fabrics 00:25:23.691 rmmod nvme_keyring 00:25:23.691 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:23.691 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:25:23.691 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:25:23.691 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2186702 ']' 00:25:23.691 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2186702 00:25:23.691 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2186702 ']' 00:25:23.691 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2186702 00:25:23.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2186702) - No such process 00:25:23.691 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2186702 is not found' 00:25:23.691 Process with pid 2186702 is not found 
00:25:23.691 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:23.691 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:23.691 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:23.691 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:25:23.691 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:25:23.691 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:23.691 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:25:23.691 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:23.691 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:23.692 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.692 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.692 07:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.608 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:25.608 00:25:25.608 real 0m10.243s 00:25:25.608 user 0m27.933s 00:25:25.608 sys 0m4.003s 00:25:25.608 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:25.608 07:35:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:25.608 ************************************ 00:25:25.608 END TEST nvmf_shutdown_tc4 00:25:25.608 ************************************ 00:25:25.608 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:25:25.608 00:25:25.608 real 0m44.299s 00:25:25.608 user 1m45.422s 00:25:25.608 sys 0m14.451s 00:25:25.608 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:25.608 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:25.608 ************************************ 00:25:25.608 END TEST nvmf_shutdown 00:25:25.608 ************************************ 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:25.871 ************************************ 00:25:25.871 START TEST nvmf_nsid 00:25:25.871 ************************************ 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:25:25.871 * Looking for test storage... 
00:25:25.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:25.871 
07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:25.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.871 --rc genhtml_branch_coverage=1 00:25:25.871 --rc genhtml_function_coverage=1 00:25:25.871 --rc genhtml_legend=1 00:25:25.871 --rc geninfo_all_blocks=1 00:25:25.871 --rc 
geninfo_unexecuted_blocks=1 00:25:25.871 00:25:25.871 ' 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:25.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.871 --rc genhtml_branch_coverage=1 00:25:25.871 --rc genhtml_function_coverage=1 00:25:25.871 --rc genhtml_legend=1 00:25:25.871 --rc geninfo_all_blocks=1 00:25:25.871 --rc geninfo_unexecuted_blocks=1 00:25:25.871 00:25:25.871 ' 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:25.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.871 --rc genhtml_branch_coverage=1 00:25:25.871 --rc genhtml_function_coverage=1 00:25:25.871 --rc genhtml_legend=1 00:25:25.871 --rc geninfo_all_blocks=1 00:25:25.871 --rc geninfo_unexecuted_blocks=1 00:25:25.871 00:25:25.871 ' 00:25:25.871 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:25.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.871 --rc genhtml_branch_coverage=1 00:25:25.871 --rc genhtml_function_coverage=1 00:25:25.871 --rc genhtml_legend=1 00:25:25.871 --rc geninfo_all_blocks=1 00:25:25.871 --rc geninfo_unexecuted_blocks=1 00:25:25.871 00:25:25.872 ' 00:25:25.872 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.872 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:25:25.872 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.872 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.872 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.872 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:25:25.872 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.872 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.872 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.872 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.872 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.872 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.134 07:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:26.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:26.134 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:26.135 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.135 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:25:26.135 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.135 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:26.135 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:26.135 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:25:26.135 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:34.283 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:34.283 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:34.284 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:34.284 Found net devices under 0000:31:00.0: cvl_0_0 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:34.284 Found net devices under 0000:31:00.1: cvl_0_1 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:34.284 07:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:34.284 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:34.545 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:25:34.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:25:34.545 00:25:34.545 --- 10.0.0.2 ping statistics --- 00:25:34.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.545 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:34.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:34.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:25:34.545 00:25:34.545 --- 10.0.0.1 ping statistics --- 00:25:34.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.545 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:34.545 07:35:18 
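The trace above shows the harness building a two-interface TCP test topology: one port (cvl_0_0) is moved into a network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule opens port 4420 before connectivity is verified with ping. A dry-run sketch of that sequence, with the interface names, IPs, and namespace name taken from the log (commands are only echoed here, since the real ones require root):

```shell
# Dry-run sketch of the netns bring-up the log performs.
# run() echoes instead of executing, so this needs no privileges.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                              # target-side namespace (from log)
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"             # target port into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator IP, root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                          # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator
```

Because the target runs inside the namespace, the nvmf_tgt app is later launched via `ip netns exec cvl_0_0_ns_spdk ...`, which is exactly what the NVMF_TARGET_NS_CMD prefix in the log does.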
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2193107 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2193107 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2193107 ']' 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:34.545 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:34.545 [2024-11-26 07:35:18.616418] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:25:34.546 [2024-11-26 07:35:18.616485] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.807 [2024-11-26 07:35:18.706017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.807 [2024-11-26 07:35:18.746812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.807 [2024-11-26 07:35:18.746849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.807 [2024-11-26 07:35:18.746857] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.807 [2024-11-26 07:35:18.746869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.807 [2024-11-26 07:35:18.746876] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:34.807 [2024-11-26 07:35:18.747505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2193139 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.378 
07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=dfc1b866-4c81-448e-b742-fe06533e562f 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=45f7ad61-f264-43cb-9a25-e315d6bb7b6c 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=05e72c38-bcb4-40a7-a87e-a1ec11878dd3 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:25:35.378 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.379 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:35.379 null0 00:25:35.379 null1 00:25:35.379 null2 00:25:35.379 [2024-11-26 07:35:19.502009] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:25:35.379 [2024-11-26 07:35:19.502061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193139 ] 00:25:35.379 [2024-11-26 07:35:19.504538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.641 [2024-11-26 07:35:19.528733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.641 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.641 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2193139 /var/tmp/tgt2.sock 00:25:35.641 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2193139 ']' 00:25:35.641 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:25:35.641 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:35.641 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:25:35.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:25:35.641 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:35.641 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:35.641 [2024-11-26 07:35:19.595814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.641 [2024-11-26 07:35:19.633325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.902 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:35.902 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:25:35.902 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:25:36.163 [2024-11-26 07:35:20.122274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.163 [2024-11-26 07:35:20.138438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:25:36.163 nvme0n1 nvme0n2 00:25:36.163 nvme1n1 00:25:36.163 07:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:25:36.163 07:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:25:36.163 07:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:37.616 07:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:25:37.616 07:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:25:37.616 07:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:25:37.616 07:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:25:37.616 07:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:25:37.616 07:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:25:37.616 07:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:25:37.616 07:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:37.616 07:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:37.617 07:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:25:37.617 07:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:25:37.617 07:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:25:37.617 07:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:25:38.559 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:38.559 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:25:38.559 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:38.559 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:25:38.559 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:38.559 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid dfc1b866-4c81-448e-b742-fe06533e562f 00:25:38.559 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:38.559 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:25:38.559 07:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:25:38.559 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:25:38.559 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=dfc1b8664c81448eb742fe06533e562f 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DFC1B8664C81448EB742FE06533E562F 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ DFC1B8664C81448EB742FE06533E562F == \D\F\C\1\B\8\6\6\4\C\8\1\4\4\8\E\B\7\4\2\F\E\0\6\5\3\3\E\5\6\2\F ]] 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 45f7ad61-f264-43cb-9a25-e315d6bb7b6c 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:25:38.821 
07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=45f7ad61f26443cb9a25e315d6bb7b6c 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 45F7AD61F26443CB9A25E315D6BB7B6C 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 45F7AD61F26443CB9A25E315D6BB7B6C == \4\5\F\7\A\D\6\1\F\2\6\4\4\3\C\B\9\A\2\5\E\3\1\5\D\6\B\B\7\B\6\C ]] 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 05e72c38-bcb4-40a7-a87e-a1ec11878dd3 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=05e72c38bcb440a7a87ea1ec11878dd3 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 05E72C38BCB440A7A87EA1EC11878DD3 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 05E72C38BCB440A7A87EA1EC11878DD3 == \0\5\E\7\2\C\3\8\B\C\B\4\4\0\A\7\A\8\7\E\A\1\E\C\1\1\8\7\8\D\D\3 ]] 00:25:38.821 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:25:39.082 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:25:39.082 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:25:39.082 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2193139 00:25:39.082 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2193139 ']' 00:25:39.082 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2193139 00:25:39.082 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:25:39.082 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:39.082 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2193139 00:25:39.082 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:39.082 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:39.082 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2193139' 00:25:39.082 killing process with pid 2193139 00:25:39.082 07:35:23 
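The nsid checks above compare each namespace's NGUID, as reported by `nvme id-ns`, against the UUID the test generated with uuidgen. Since the NGUID is the same 128-bit value rendered as 32 hex digits, the comparison reduces to stripping dashes and normalizing case. A minimal sketch of that conversion (the UUID value is the first one from the log; the helper name mirrors, but is not, the harness's own `uuid2nguid`):

```shell
# Convert a dashed UUID to the 32-hex-digit uppercase form that
# `nvme id-ns` reports as the NGUID, then compare.
uuid_to_nguid() { printf '%s' "$1" | tr -d - | tr '[:lower:]' '[:upper:]'; }

ns1uuid=dfc1b866-4c81-448e-b742-fe06533e562f    # from the log
nguid=$(uuid_to_nguid "$ns1uuid")
echo "$nguid"                                    # DFC1B8664C81448EB742FE06533E562F
```

If the two strings match, the target correctly propagated the UUID given at namespace creation into the NGUID field visible to the host.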
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2193139 00:25:39.082 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2193139 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:39.344 rmmod nvme_tcp 00:25:39.344 rmmod nvme_fabrics 00:25:39.344 rmmod nvme_keyring 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2193107 ']' 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2193107 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2193107 ']' 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2193107 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:39.344 07:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2193107 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2193107' 00:25:39.344 killing process with pid 2193107 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2193107 00:25:39.344 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2193107 00:25:39.604 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:39.604 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:39.604 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:39.604 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:25:39.604 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:25:39.604 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:25:39.604 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:39.604 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:39.604 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:39.604 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.604 07:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.604 07:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.151 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:42.151 00:25:42.151 real 0m15.852s 00:25:42.151 user 0m11.445s 00:25:42.151 sys 0m7.576s 00:25:42.151 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:42.151 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:42.151 ************************************ 00:25:42.151 END TEST nvmf_nsid 00:25:42.151 ************************************ 00:25:42.151 07:35:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:25:42.151 00:25:42.151 real 13m30.318s 00:25:42.151 user 27m33.867s 00:25:42.151 sys 4m11.267s 00:25:42.151 07:35:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:42.151 07:35:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:42.151 ************************************ 00:25:42.151 END TEST nvmf_target_extra 00:25:42.151 ************************************ 00:25:42.151 07:35:25 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:42.151 07:35:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:42.151 07:35:25 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:42.151 07:35:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:42.151 ************************************ 00:25:42.151 START TEST nvmf_host 00:25:42.151 ************************************ 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:42.151 * Looking for test storage... 
00:25:42.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:42.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.151 --rc genhtml_branch_coverage=1 00:25:42.151 --rc genhtml_function_coverage=1 00:25:42.151 --rc genhtml_legend=1 00:25:42.151 --rc geninfo_all_blocks=1 00:25:42.151 --rc geninfo_unexecuted_blocks=1 00:25:42.151 00:25:42.151 ' 00:25:42.151 07:35:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:42.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.152 --rc genhtml_branch_coverage=1 00:25:42.152 --rc genhtml_function_coverage=1 00:25:42.152 --rc genhtml_legend=1 00:25:42.152 --rc 
geninfo_all_blocks=1 00:25:42.152 --rc geninfo_unexecuted_blocks=1 00:25:42.152 00:25:42.152 ' 00:25:42.152 07:35:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:42.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.152 --rc genhtml_branch_coverage=1 00:25:42.152 --rc genhtml_function_coverage=1 00:25:42.152 --rc genhtml_legend=1 00:25:42.152 --rc geninfo_all_blocks=1 00:25:42.152 --rc geninfo_unexecuted_blocks=1 00:25:42.152 00:25:42.152 ' 00:25:42.152 07:35:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:42.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.152 --rc genhtml_branch_coverage=1 00:25:42.152 --rc genhtml_function_coverage=1 00:25:42.152 --rc genhtml_legend=1 00:25:42.152 --rc geninfo_all_blocks=1 00:25:42.152 --rc geninfo_unexecuted_blocks=1 00:25:42.152 00:25:42.152 ' 00:25:42.152 07:35:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.152 07:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:25:42.152 07:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.152 07:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.152 07:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.152 07:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.152 07:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.152 07:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.152 07:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.152 07:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.152 07:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.152 07:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
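The trace above repeatedly exercises the `lt 1.15 2` / `cmp_versions` helper from `scripts/common.sh` to decide whether the installed lcov is new enough for the extra `--rc` options. A minimal re-implementation sketch of that component-wise comparison (hypothetical, not the project's exact code; it mirrors the traced behavior of splitting on `.`, `-`, `:` and returning 0 when the first version is lower):

```shell
#!/usr/bin/env bash
# Sketch of the lt/cmp_versions logic traced above: split both versions on
# '.', '-' and ':' and compare numerically, component by component.
lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v a b len=${#ver1[@]}
  (( ${#ver2[@]} > len )) && len=${#ver2[@]}
  for (( v = 0; v < len; v++ )); do
    a=${ver1[v]:-0}   # missing components compare as 0
    b=${ver2[v]:-0}
    (( a > b )) && return 1   # ver1 is newer: not less-than
    (( a < b )) && return 0   # ver1 is older: less-than
  done
  return 1                    # equal: not less-than
}
```

In the log, `lt 1.15 2` succeeds (1 < 2 in the first component), so the script enables the branch/function coverage `--rc` flags.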
-- # nvme gen-hostnqn 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:42.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.152 ************************************ 00:25:42.152 START TEST nvmf_multicontroller 00:25:42.152 ************************************ 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:42.152 * Looking for test storage... 
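The `[: : integer expression expected` message recorded above comes from `'[' '' -eq 1 ']'` at `common.sh` line 33: `-eq` requires both operands to be integers, so an unset or empty variable makes the test itself error (exit status 2) rather than simply evaluate false. A defensive pattern (illustrative only, not the project's actual fix; `check_flag` is a hypothetical helper name) defaults the value before the arithmetic test:

```shell
#!/usr/bin/env bash
# [ "" -eq 1 ] prints "integer expression expected" and exits 2.
# Defaulting the operand first keeps the test well-formed:
check_flag() {
  local flag=${1:-0}      # empty or unset collapses to 0
  [ "$flag" -eq 1 ]
}
```

The script continues anyway because the failing `[` is only used as a condition, but the stderr noise ends up interleaved with the test output, as seen here.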
00:25:42.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:42.152 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:42.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.153 --rc genhtml_branch_coverage=1 00:25:42.153 --rc genhtml_function_coverage=1 
00:25:42.153 --rc genhtml_legend=1 00:25:42.153 --rc geninfo_all_blocks=1 00:25:42.153 --rc geninfo_unexecuted_blocks=1 00:25:42.153 00:25:42.153 ' 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:42.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.153 --rc genhtml_branch_coverage=1 00:25:42.153 --rc genhtml_function_coverage=1 00:25:42.153 --rc genhtml_legend=1 00:25:42.153 --rc geninfo_all_blocks=1 00:25:42.153 --rc geninfo_unexecuted_blocks=1 00:25:42.153 00:25:42.153 ' 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:42.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.153 --rc genhtml_branch_coverage=1 00:25:42.153 --rc genhtml_function_coverage=1 00:25:42.153 --rc genhtml_legend=1 00:25:42.153 --rc geninfo_all_blocks=1 00:25:42.153 --rc geninfo_unexecuted_blocks=1 00:25:42.153 00:25:42.153 ' 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:42.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.153 --rc genhtml_branch_coverage=1 00:25:42.153 --rc genhtml_function_coverage=1 00:25:42.153 --rc genhtml_legend=1 00:25:42.153 --rc geninfo_all_blocks=1 00:25:42.153 --rc geninfo_unexecuted_blocks=1 00:25:42.153 00:25:42.153 ' 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.415 07:35:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:42.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:25:42.415 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:50.564 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:50.564 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:50.564 07:35:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:50.564 Found net devices under 0000:31:00.0: cvl_0_0 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.564 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:50.565 Found net devices under 0000:31:00.1: cvl_0_1 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:50.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:50.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:25:50.565 00:25:50.565 --- 10.0.0.2 ping statistics --- 00:25:50.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.565 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:50.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:50.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:25:50.565 00:25:50.565 --- 10.0.0.1 ping statistics --- 00:25:50.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.565 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2198914 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2198914 00:25:50.565 07:35:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2198914 ']' 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.565 07:35:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:50.565 [2024-11-26 07:35:34.668776] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:25:50.565 [2024-11-26 07:35:34.668841] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.827 [2024-11-26 07:35:34.775117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:50.827 [2024-11-26 07:35:34.827674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.827 [2024-11-26 07:35:34.827727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:50.827 [2024-11-26 07:35:34.827735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.827 [2024-11-26 07:35:34.827742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.827 [2024-11-26 07:35:34.827749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:50.827 [2024-11-26 07:35:34.829597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.827 [2024-11-26 07:35:34.829762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.827 [2024-11-26 07:35:34.829762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:51.400 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:51.400 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:25:51.400 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:51.400 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:51.400 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:51.400 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:51.400 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:51.400 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.400 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:51.400 [2024-11-26 07:35:35.516440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.400 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.400 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:51.400 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.400 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:51.661 Malloc0 00:25:51.661 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.661 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:51.661 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.661 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:51.661 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.661 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:51.661 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.661 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:51.661 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.661 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:51.661 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.661 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:51.661 [2024-11-26 
07:35:35.571897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.661 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:51.662 [2024-11-26 07:35:35.579867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:51.662 Malloc1 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2198967 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2198967 /var/tmp/bdevperf.sock 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@835 -- # '[' -z 2198967 ']' 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:51.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:51.662 07:35:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:52.608 NVMe0n1 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.608 1 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:52.608 07:35:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:52.608 request: 00:25:52.608 { 00:25:52.608 "name": "NVMe0", 00:25:52.608 "trtype": "tcp", 00:25:52.608 "traddr": "10.0.0.2", 00:25:52.608 "adrfam": "ipv4", 00:25:52.608 "trsvcid": "4420", 00:25:52.608 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:52.608 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:52.608 "hostaddr": "10.0.0.1", 00:25:52.608 "prchk_reftag": false, 00:25:52.608 "prchk_guard": false, 00:25:52.608 "hdgst": false, 00:25:52.608 "ddgst": false, 00:25:52.608 "allow_unrecognized_csi": false, 00:25:52.608 "method": "bdev_nvme_attach_controller", 00:25:52.608 "req_id": 1 00:25:52.608 } 00:25:52.608 Got JSON-RPC error response 00:25:52.608 response: 00:25:52.608 { 00:25:52.608 "code": -114, 00:25:52.608 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:52.608 } 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:52.608 07:35:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:52.608 request: 00:25:52.608 { 00:25:52.608 "name": "NVMe0", 00:25:52.608 "trtype": "tcp", 00:25:52.608 "traddr": "10.0.0.2", 00:25:52.608 "adrfam": "ipv4", 00:25:52.608 "trsvcid": "4420", 00:25:52.608 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:52.608 "hostaddr": "10.0.0.1", 00:25:52.608 "prchk_reftag": false, 00:25:52.608 "prchk_guard": false, 00:25:52.608 "hdgst": false, 00:25:52.608 "ddgst": false, 00:25:52.608 "allow_unrecognized_csi": false, 00:25:52.608 "method": "bdev_nvme_attach_controller", 00:25:52.608 "req_id": 1 00:25:52.608 } 00:25:52.608 Got JSON-RPC error response 00:25:52.608 response: 00:25:52.608 { 00:25:52.608 "code": -114, 00:25:52.608 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:52.608 } 00:25:52.608 07:35:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:52.608 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:52.609 request: 00:25:52.609 { 00:25:52.609 "name": "NVMe0", 00:25:52.609 "trtype": "tcp", 00:25:52.609 "traddr": "10.0.0.2", 00:25:52.609 "adrfam": "ipv4", 00:25:52.609 "trsvcid": "4420", 00:25:52.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:52.609 "hostaddr": "10.0.0.1", 00:25:52.609 "prchk_reftag": false, 00:25:52.609 "prchk_guard": false, 00:25:52.609 "hdgst": false, 00:25:52.609 "ddgst": false, 00:25:52.609 "multipath": "disable", 00:25:52.609 "allow_unrecognized_csi": false, 00:25:52.609 "method": "bdev_nvme_attach_controller", 00:25:52.609 "req_id": 1 00:25:52.609 } 00:25:52.609 Got JSON-RPC error response 00:25:52.609 response: 00:25:52.609 { 00:25:52.609 "code": -114, 00:25:52.609 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:25:52.609 } 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:52.609 request: 00:25:52.609 { 00:25:52.609 "name": "NVMe0", 00:25:52.609 "trtype": "tcp", 00:25:52.609 "traddr": "10.0.0.2", 00:25:52.609 "adrfam": "ipv4", 00:25:52.609 "trsvcid": "4420", 00:25:52.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:52.609 "hostaddr": "10.0.0.1", 00:25:52.609 "prchk_reftag": false, 00:25:52.609 "prchk_guard": false, 00:25:52.609 "hdgst": false, 00:25:52.609 "ddgst": false, 00:25:52.609 "multipath": "failover", 00:25:52.609 "allow_unrecognized_csi": false, 00:25:52.609 "method": "bdev_nvme_attach_controller", 00:25:52.609 "req_id": 1 00:25:52.609 } 00:25:52.609 Got JSON-RPC error response 00:25:52.609 response: 00:25:52.609 { 00:25:52.609 "code": -114, 00:25:52.609 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:52.609 } 00:25:52.609 07:35:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.609 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:52.872 NVMe0n1 00:25:52.872 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.872 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:52.872 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.872 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:52.872 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.872 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:52.872 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.872 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:52.872 00:25:52.872 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.872 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:52.872 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:52.872 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.872 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:52.872 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.872 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:52.872 07:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:54.262 { 00:25:54.262 "results": [ 00:25:54.262 { 00:25:54.262 "job": "NVMe0n1", 00:25:54.262 "core_mask": "0x1", 00:25:54.262 "workload": "write", 00:25:54.262 "status": "finished", 00:25:54.262 "queue_depth": 128, 00:25:54.262 "io_size": 4096, 00:25:54.262 "runtime": 1.004994, 00:25:54.262 "iops": 20002.10946533014, 00:25:54.262 "mibps": 78.13324009894586, 00:25:54.262 "io_failed": 0, 00:25:54.262 "io_timeout": 0, 00:25:54.262 "avg_latency_us": 6390.166163234172, 00:25:54.262 "min_latency_us": 4014.08, 00:25:54.262 "max_latency_us": 15947.093333333334 00:25:54.262 } 00:25:54.262 ], 00:25:54.262 "core_count": 1 00:25:54.262 } 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2198967 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2198967 ']' 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2198967 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2198967 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2198967' 00:25:54.262 killing process with pid 2198967 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2198967 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2198967 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:54.262 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:25:54.263 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:54.263 [2024-11-26 07:35:35.680334] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:25:54.263 [2024-11-26 07:35:35.680394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2198967 ] 00:25:54.263 [2024-11-26 07:35:35.758118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.263 [2024-11-26 07:35:35.794419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.263 [2024-11-26 07:35:36.903228] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name 39abfc24-011f-4df1-93de-8ffb39f39dda already exists 00:25:54.263 [2024-11-26 07:35:36.903259] bdev.c:7832:bdev_register: *ERROR*: Unable to add uuid:39abfc24-011f-4df1-93de-8ffb39f39dda alias for bdev NVMe1n1 00:25:54.263 [2024-11-26 07:35:36.903268] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:54.263 Running I/O for 1 seconds... 00:25:54.263 19974.00 IOPS, 78.02 MiB/s 00:25:54.263 Latency(us) 00:25:54.263 [2024-11-26T06:35:38.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.263 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:54.263 NVMe0n1 : 1.00 20002.11 78.13 0.00 0.00 6390.17 4014.08 15947.09 00:25:54.263 [2024-11-26T06:35:38.400Z] =================================================================================================================== 00:25:54.263 [2024-11-26T06:35:38.400Z] Total : 20002.11 78.13 0.00 0.00 6390.17 4014.08 15947.09 00:25:54.263 Received shutdown signal, test time was about 1.000000 seconds 00:25:54.263 00:25:54.263 Latency(us) 00:25:54.263 [2024-11-26T06:35:38.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.263 [2024-11-26T06:35:38.400Z] =================================================================================================================== 00:25:54.263 [2024-11-26T06:35:38.400Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:25:54.263 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:54.263 rmmod nvme_tcp 00:25:54.263 rmmod nvme_fabrics 00:25:54.263 rmmod nvme_keyring 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2198914 ']' 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2198914 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2198914 ']' 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2198914 
00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:54.263 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2198914 00:25:54.525 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:54.525 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:54.525 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2198914' 00:25:54.525 killing process with pid 2198914 00:25:54.525 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2198914 00:25:54.525 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2198914 00:25:54.525 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:54.525 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:54.525 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:54.525 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:25:54.525 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:25:54.525 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:54.525 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:25:54.525 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:54.525 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:25:54.525 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.525 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.525 07:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:57.075 00:25:57.075 real 0m14.584s 00:25:57.075 user 0m16.512s 00:25:57.075 sys 0m7.064s 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:57.075 ************************************ 00:25:57.075 END TEST nvmf_multicontroller 00:25:57.075 ************************************ 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.075 ************************************ 00:25:57.075 START TEST nvmf_aer 00:25:57.075 ************************************ 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:57.075 * Looking for test storage... 
00:25:57.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:57.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.075 --rc genhtml_branch_coverage=1 00:25:57.075 --rc genhtml_function_coverage=1 00:25:57.075 --rc genhtml_legend=1 00:25:57.075 --rc geninfo_all_blocks=1 00:25:57.075 --rc geninfo_unexecuted_blocks=1 00:25:57.075 00:25:57.075 ' 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:57.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.075 --rc 
genhtml_branch_coverage=1 00:25:57.075 --rc genhtml_function_coverage=1 00:25:57.075 --rc genhtml_legend=1 00:25:57.075 --rc geninfo_all_blocks=1 00:25:57.075 --rc geninfo_unexecuted_blocks=1 00:25:57.075 00:25:57.075 ' 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:57.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.075 --rc genhtml_branch_coverage=1 00:25:57.075 --rc genhtml_function_coverage=1 00:25:57.075 --rc genhtml_legend=1 00:25:57.075 --rc geninfo_all_blocks=1 00:25:57.075 --rc geninfo_unexecuted_blocks=1 00:25:57.075 00:25:57.075 ' 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:57.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.075 --rc genhtml_branch_coverage=1 00:25:57.075 --rc genhtml_function_coverage=1 00:25:57.075 --rc genhtml_legend=1 00:25:57.075 --rc geninfo_all_blocks=1 00:25:57.075 --rc geninfo_unexecuted_blocks=1 00:25:57.075 00:25:57.075 ' 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.075 07:35:40 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:57.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:25:57.076 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:05.224 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:05.224 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:26:05.224 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:05.224 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:05.224 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:05.224 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:05.224 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:05.224 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:26:05.224 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:05.224 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:26:05.224 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:26:05.224 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:26:05.224 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:26:05.224 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:26:05.224 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:05.225 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:05.225 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.225 07:35:49 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:05.225 Found net devices under 0000:31:00.0: cvl_0_0 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:05.225 Found net devices under 0000:31:00.1: cvl_0_1 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:05.225 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:05.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:05.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:26:05.491 00:26:05.491 --- 10.0.0.2 ping statistics --- 00:26:05.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.491 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:05.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:05.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:26:05.491 00:26:05.491 --- 10.0.0.1 ping statistics --- 00:26:05.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.491 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2204310 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2204310 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2204310 ']' 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:05.491 07:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:05.491 [2024-11-26 07:35:49.495726] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:26:05.491 [2024-11-26 07:35:49.495796] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.491 [2024-11-26 07:35:49.588011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:05.752 [2024-11-26 07:35:49.629521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:05.752 [2024-11-26 07:35:49.629558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:05.752 [2024-11-26 07:35:49.629566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:05.752 [2024-11-26 07:35:49.629573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:05.752 [2024-11-26 07:35:49.629579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:05.752 [2024-11-26 07:35:49.631145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.752 [2024-11-26 07:35:49.631290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:05.752 [2024-11-26 07:35:49.631450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.752 [2024-11-26 07:35:49.631451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:06.323 [2024-11-26 07:35:50.338048] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:06.323 Malloc0 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:06.323 [2024-11-26 07:35:50.409043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
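With the subsystem listening, the test launches the aer binary and blocks on /tmp/aer_touch_file via the harness's waitforfile helper (the `'[' '!' -e /tmp/aer_touch_file ']'` / `sleep 0.1` retry loop visible in the trace). A minimal standalone sketch of that bounded-poll pattern, with the retry limit made a parameter for illustration (an assumption; the real helper in common/autotest_common.sh hard-codes a budget of 200 iterations of 0.1 s):

```shell
#!/usr/bin/env bash
# Bounded poll for a sentinel file, modeled on the waitforfile loop
# traced in this log. max_tries is parameterized here for illustration;
# the harness itself uses a fixed 200 x 0.1 s budget.
waitforfile() {
    local file=$1
    local max_tries=${2:-200}   # assumption: configurable cap, default 200
    local i=0
    while [ ! -e "$file" ] && [ "$i" -lt "$max_tries" ]; do
        sleep 0.1
        i=$((i + 1))
    done
    # Succeed only if the file appeared before the budget ran out.
    [ -e "$file" ]
}
```

Used the way the harness does, e.g. `waitforfile /tmp/aer_touch_file || exit 1`, this makes the subsequent `nvmf_subsystem_add_ns` RPC (which triggers the Changed Namespace AER) wait until the aer tool has registered its event callbacks.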
00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:06.323 [ 00:26:06.323 { 00:26:06.323 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:06.323 "subtype": "Discovery", 00:26:06.323 "listen_addresses": [], 00:26:06.323 "allow_any_host": true, 00:26:06.323 "hosts": [] 00:26:06.323 }, 00:26:06.323 { 00:26:06.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:06.323 "subtype": "NVMe", 00:26:06.323 "listen_addresses": [ 00:26:06.323 { 00:26:06.323 "trtype": "TCP", 00:26:06.323 "adrfam": "IPv4", 00:26:06.323 "traddr": "10.0.0.2", 00:26:06.323 "trsvcid": "4420" 00:26:06.323 } 00:26:06.323 ], 00:26:06.323 "allow_any_host": true, 00:26:06.323 "hosts": [], 00:26:06.323 "serial_number": "SPDK00000000000001", 00:26:06.323 "model_number": "SPDK bdev Controller", 00:26:06.323 "max_namespaces": 2, 00:26:06.323 "min_cntlid": 1, 00:26:06.323 "max_cntlid": 65519, 00:26:06.323 "namespaces": [ 00:26:06.323 { 00:26:06.323 "nsid": 1, 00:26:06.323 "bdev_name": "Malloc0", 00:26:06.323 "name": "Malloc0", 00:26:06.323 "nguid": "A3F49AD58EC44E999275718FFB97BCD3", 00:26:06.323 "uuid": "a3f49ad5-8ec4-4e99-9275-718ffb97bcd3" 00:26:06.323 } 00:26:06.323 ] 00:26:06.323 } 00:26:06.323 ] 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2204606 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:26:06.323 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:06.584 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:06.584 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:26:06.584 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:26:06.584 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:06.584 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:06.584 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:26:06.584 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:26:06.584 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 3 -lt 200 ']' 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=4 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:06.845 Malloc1 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:06.845 Asynchronous Event Request test 00:26:06.845 Attaching to 10.0.0.2 00:26:06.845 Attached to 10.0.0.2 00:26:06.845 Registering 
asynchronous event callbacks... 00:26:06.845 Starting namespace attribute notice tests for all controllers... 00:26:06.845 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:06.845 aer_cb - Changed Namespace 00:26:06.845 Cleaning up... 00:26:06.845 [ 00:26:06.845 { 00:26:06.845 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:06.845 "subtype": "Discovery", 00:26:06.845 "listen_addresses": [], 00:26:06.845 "allow_any_host": true, 00:26:06.845 "hosts": [] 00:26:06.845 }, 00:26:06.845 { 00:26:06.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:06.845 "subtype": "NVMe", 00:26:06.845 "listen_addresses": [ 00:26:06.845 { 00:26:06.845 "trtype": "TCP", 00:26:06.845 "adrfam": "IPv4", 00:26:06.845 "traddr": "10.0.0.2", 00:26:06.845 "trsvcid": "4420" 00:26:06.845 } 00:26:06.845 ], 00:26:06.845 "allow_any_host": true, 00:26:06.845 "hosts": [], 00:26:06.845 "serial_number": "SPDK00000000000001", 00:26:06.845 "model_number": "SPDK bdev Controller", 00:26:06.845 "max_namespaces": 2, 00:26:06.845 "min_cntlid": 1, 00:26:06.845 "max_cntlid": 65519, 00:26:06.845 "namespaces": [ 00:26:06.845 { 00:26:06.845 "nsid": 1, 00:26:06.845 "bdev_name": "Malloc0", 00:26:06.845 "name": "Malloc0", 00:26:06.845 "nguid": "A3F49AD58EC44E999275718FFB97BCD3", 00:26:06.845 "uuid": "a3f49ad5-8ec4-4e99-9275-718ffb97bcd3" 00:26:06.845 }, 00:26:06.845 { 00:26:06.845 "nsid": 2, 00:26:06.845 "bdev_name": "Malloc1", 00:26:06.845 "name": "Malloc1", 00:26:06.845 "nguid": "24C81979442B4C4696D45C19AB6C845D", 00:26:06.845 "uuid": "24c81979-442b-4c46-96d4-5c19ab6c845d" 00:26:06.845 } 00:26:06.845 ] 00:26:06.845 } 00:26:06.845 ] 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2204606 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:06.845 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:26:07.106 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:07.106 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:26:07.106 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:07.106 07:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:07.106 rmmod nvme_tcp 00:26:07.106 rmmod nvme_fabrics 00:26:07.106 rmmod nvme_keyring 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2204310 ']' 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2204310 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2204310 ']' 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2204310 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2204310 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2204310' 00:26:07.106 killing process with pid 2204310 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2204310 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2204310 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:26:07.106 07:35:51 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:07.106 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:26:07.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:07.369 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:07.369 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.369 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:07.369 07:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.283 07:35:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:09.283 00:26:09.283 real 0m12.600s 00:26:09.283 user 0m8.902s 00:26:09.283 sys 0m6.880s 00:26:09.283 07:35:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:09.283 07:35:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:09.283 ************************************ 00:26:09.283 END TEST nvmf_aer 00:26:09.283 ************************************ 00:26:09.283 07:35:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:09.283 07:35:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:09.283 07:35:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:09.283 07:35:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.283 ************************************ 00:26:09.283 START TEST nvmf_async_init 00:26:09.283 ************************************ 00:26:09.283 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:09.545 * Looking for test storage... 00:26:09.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:26:09.545 
07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:09.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.545 --rc genhtml_branch_coverage=1 00:26:09.545 --rc genhtml_function_coverage=1 00:26:09.545 --rc genhtml_legend=1 
00:26:09.545 --rc geninfo_all_blocks=1 00:26:09.545 --rc geninfo_unexecuted_blocks=1 00:26:09.545 00:26:09.545 ' 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:09.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.545 --rc genhtml_branch_coverage=1 00:26:09.545 --rc genhtml_function_coverage=1 00:26:09.545 --rc genhtml_legend=1 00:26:09.545 --rc geninfo_all_blocks=1 00:26:09.545 --rc geninfo_unexecuted_blocks=1 00:26:09.545 00:26:09.545 ' 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:09.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.545 --rc genhtml_branch_coverage=1 00:26:09.545 --rc genhtml_function_coverage=1 00:26:09.545 --rc genhtml_legend=1 00:26:09.545 --rc geninfo_all_blocks=1 00:26:09.545 --rc geninfo_unexecuted_blocks=1 00:26:09.545 00:26:09.545 ' 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:09.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.545 --rc genhtml_branch_coverage=1 00:26:09.545 --rc genhtml_function_coverage=1 00:26:09.545 --rc genhtml_legend=1 00:26:09.545 --rc geninfo_all_blocks=1 00:26:09.545 --rc geninfo_unexecuted_blocks=1 00:26:09.545 00:26:09.545 ' 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:09.545 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.546 07:35:53 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:09.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=263b513241c1427b9c1a46329aae1d3b 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:26:09.546 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:17.690 07:36:01 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:17.690 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:17.691 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:17.691 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:17.691 Found net devices under 0000:31:00.0: cvl_0_0 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:17.691 Found net devices under 0000:31:00.1: cvl_0_1 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:17.691 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:17.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:17.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:26:17.951 00:26:17.951 --- 10.0.0.2 ping statistics --- 00:26:17.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.951 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:17.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:17.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:26:17.951 00:26:17.951 --- 10.0.0.1 ping statistics --- 00:26:17.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.951 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- 
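The `ipts` call in the trace above expands into a plain `iptables` invocation tagged with a `SPDK_NVMF:` comment, which is how the harness can later find and delete exactly the rules this run installed. A minimal reconstruction of that wrapper (assumption: this mirrors the helper in SPDK's test `nvmf/common.sh`; the function body here is inferred from the expanded command visible in the log):

```shell
# Hypothetical reconstruction of the ipts helper seen in the log above:
# every rule it installs carries an identifying comment so teardown can
# remove only the rules created by this test run.
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Example (requires root; shown for illustration only):
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

With the arguments from this run, the wrapper produces exactly the `iptables ... -m comment --comment 'SPDK_NVMF:...'` command recorded in the trace.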
common/autotest_common.sh@726 -- # xtrace_disable 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2209463 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2209463 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2209463 ']' 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:17.951 07:36:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:17.951 [2024-11-26 07:36:02.025961] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:26:17.951 [2024-11-26 07:36:02.026017] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.212 [2024-11-26 07:36:02.111513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.212 [2024-11-26 07:36:02.147853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.212 [2024-11-26 07:36:02.147889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.212 [2024-11-26 07:36:02.147898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:18.212 [2024-11-26 07:36:02.147905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:18.212 [2024-11-26 07:36:02.147910] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:18.212 [2024-11-26 07:36:02.148525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:18.782 [2024-11-26 07:36:02.876817] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:18.782 null0 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.782 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:19.043 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.043 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 263b513241c1427b9c1a46329aae1d3b 00:26:19.043 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.043 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:19.043 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.043 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:19.043 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.043 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:19.043 [2024-11-26 07:36:02.937132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.043 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.043 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
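The `-g` namespace GUID passed to `nvmf_subsystem_add_ns` above is derived earlier in the script (`host/async_init.sh@20`) by stripping the dashes from a freshly generated UUID. Sketched here with the UUID from this run substituted for `uuidgen` so the result is deterministic:

```shell
# The test script does: nguid=$(uuidgen | tr -d -)
# Using this run's UUID in place of uuidgen for a reproducible example:
uuid=263b5132-41c1-427b-9c1a-46329aae1d3b
nguid=$(echo "$uuid" | tr -d -)
echo "$nguid"    # 263b513241c1427b9c1a46329aae1d3b
```

This is why the bdev later reports the dashed form under `"uuid"`/`"aliases"` while the subsystem namespace was created with the 32-hex-digit NGUID.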
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:19.043 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.043 07:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:19.304 nvme0n1 00:26:19.304 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.304 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:19.304 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.304 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:19.304 [ 00:26:19.304 { 00:26:19.304 "name": "nvme0n1", 00:26:19.304 "aliases": [ 00:26:19.304 "263b5132-41c1-427b-9c1a-46329aae1d3b" 00:26:19.304 ], 00:26:19.304 "product_name": "NVMe disk", 00:26:19.304 "block_size": 512, 00:26:19.304 "num_blocks": 2097152, 00:26:19.304 "uuid": "263b5132-41c1-427b-9c1a-46329aae1d3b", 00:26:19.304 "numa_id": 0, 00:26:19.304 "assigned_rate_limits": { 00:26:19.304 "rw_ios_per_sec": 0, 00:26:19.304 "rw_mbytes_per_sec": 0, 00:26:19.304 "r_mbytes_per_sec": 0, 00:26:19.304 "w_mbytes_per_sec": 0 00:26:19.304 }, 00:26:19.304 "claimed": false, 00:26:19.304 "zoned": false, 00:26:19.304 "supported_io_types": { 00:26:19.304 "read": true, 00:26:19.304 "write": true, 00:26:19.304 "unmap": false, 00:26:19.304 "flush": true, 00:26:19.304 "reset": true, 00:26:19.304 "nvme_admin": true, 00:26:19.304 "nvme_io": true, 00:26:19.304 "nvme_io_md": false, 00:26:19.304 "write_zeroes": true, 00:26:19.304 "zcopy": false, 00:26:19.304 "get_zone_info": false, 00:26:19.304 "zone_management": false, 00:26:19.304 "zone_append": false, 00:26:19.304 "compare": true, 00:26:19.304 "compare_and_write": true, 00:26:19.304 "abort": true, 00:26:19.304 "seek_hole": false, 00:26:19.304 "seek_data": false, 00:26:19.304 "copy": true, 00:26:19.304 
"nvme_iov_md": false 00:26:19.304 }, 00:26:19.304 "memory_domains": [ 00:26:19.304 { 00:26:19.304 "dma_device_id": "system", 00:26:19.304 "dma_device_type": 1 00:26:19.304 } 00:26:19.304 ], 00:26:19.304 "driver_specific": { 00:26:19.304 "nvme": [ 00:26:19.304 { 00:26:19.304 "trid": { 00:26:19.304 "trtype": "TCP", 00:26:19.304 "adrfam": "IPv4", 00:26:19.304 "traddr": "10.0.0.2", 00:26:19.304 "trsvcid": "4420", 00:26:19.304 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:19.304 }, 00:26:19.304 "ctrlr_data": { 00:26:19.304 "cntlid": 1, 00:26:19.304 "vendor_id": "0x8086", 00:26:19.304 "model_number": "SPDK bdev Controller", 00:26:19.304 "serial_number": "00000000000000000000", 00:26:19.304 "firmware_revision": "25.01", 00:26:19.304 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:19.304 "oacs": { 00:26:19.304 "security": 0, 00:26:19.304 "format": 0, 00:26:19.304 "firmware": 0, 00:26:19.304 "ns_manage": 0 00:26:19.304 }, 00:26:19.304 "multi_ctrlr": true, 00:26:19.304 "ana_reporting": false 00:26:19.304 }, 00:26:19.304 "vs": { 00:26:19.304 "nvme_version": "1.3" 00:26:19.304 }, 00:26:19.304 "ns_data": { 00:26:19.304 "id": 1, 00:26:19.304 "can_share": true 00:26:19.304 } 00:26:19.304 } 00:26:19.304 ], 00:26:19.304 "mp_policy": "active_passive" 00:26:19.304 } 00:26:19.304 } 00:26:19.304 ] 00:26:19.304 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.304 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:19.304 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.304 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:19.304 [2024-11-26 07:36:03.211323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:19.304 [2024-11-26 07:36:03.211385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1f8f460 (9): Bad file descriptor 00:26:19.304 [2024-11-26 07:36:03.342958] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:26:19.304 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.304 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:19.304 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:19.305 [ 00:26:19.305 { 00:26:19.305 "name": "nvme0n1", 00:26:19.305 "aliases": [ 00:26:19.305 "263b5132-41c1-427b-9c1a-46329aae1d3b" 00:26:19.305 ], 00:26:19.305 "product_name": "NVMe disk", 00:26:19.305 "block_size": 512, 00:26:19.305 "num_blocks": 2097152, 00:26:19.305 "uuid": "263b5132-41c1-427b-9c1a-46329aae1d3b", 00:26:19.305 "numa_id": 0, 00:26:19.305 "assigned_rate_limits": { 00:26:19.305 "rw_ios_per_sec": 0, 00:26:19.305 "rw_mbytes_per_sec": 0, 00:26:19.305 "r_mbytes_per_sec": 0, 00:26:19.305 "w_mbytes_per_sec": 0 00:26:19.305 }, 00:26:19.305 "claimed": false, 00:26:19.305 "zoned": false, 00:26:19.305 "supported_io_types": { 00:26:19.305 "read": true, 00:26:19.305 "write": true, 00:26:19.305 "unmap": false, 00:26:19.305 "flush": true, 00:26:19.305 "reset": true, 00:26:19.305 "nvme_admin": true, 00:26:19.305 "nvme_io": true, 00:26:19.305 "nvme_io_md": false, 00:26:19.305 "write_zeroes": true, 00:26:19.305 "zcopy": false, 00:26:19.305 "get_zone_info": false, 00:26:19.305 "zone_management": false, 00:26:19.305 "zone_append": false, 00:26:19.305 "compare": true, 00:26:19.305 "compare_and_write": true, 00:26:19.305 "abort": true, 00:26:19.305 "seek_hole": false, 00:26:19.305 "seek_data": false, 00:26:19.305 "copy": true, 00:26:19.305 "nvme_iov_md": false 00:26:19.305 }, 00:26:19.305 "memory_domains": [ 
00:26:19.305 { 00:26:19.305 "dma_device_id": "system", 00:26:19.305 "dma_device_type": 1 00:26:19.305 } 00:26:19.305 ], 00:26:19.305 "driver_specific": { 00:26:19.305 "nvme": [ 00:26:19.305 { 00:26:19.305 "trid": { 00:26:19.305 "trtype": "TCP", 00:26:19.305 "adrfam": "IPv4", 00:26:19.305 "traddr": "10.0.0.2", 00:26:19.305 "trsvcid": "4420", 00:26:19.305 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:19.305 }, 00:26:19.305 "ctrlr_data": { 00:26:19.305 "cntlid": 2, 00:26:19.305 "vendor_id": "0x8086", 00:26:19.305 "model_number": "SPDK bdev Controller", 00:26:19.305 "serial_number": "00000000000000000000", 00:26:19.305 "firmware_revision": "25.01", 00:26:19.305 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:19.305 "oacs": { 00:26:19.305 "security": 0, 00:26:19.305 "format": 0, 00:26:19.305 "firmware": 0, 00:26:19.305 "ns_manage": 0 00:26:19.305 }, 00:26:19.305 "multi_ctrlr": true, 00:26:19.305 "ana_reporting": false 00:26:19.305 }, 00:26:19.305 "vs": { 00:26:19.305 "nvme_version": "1.3" 00:26:19.305 }, 00:26:19.305 "ns_data": { 00:26:19.305 "id": 1, 00:26:19.305 "can_share": true 00:26:19.305 } 00:26:19.305 } 00:26:19.305 ], 00:26:19.305 "mp_policy": "active_passive" 00:26:19.305 } 00:26:19.305 } 00:26:19.305 ] 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.1FnZXjKcNH 
00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.1FnZXjKcNH 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.1FnZXjKcNH 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.305 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:19.305 [2024-11-26 07:36:03.432013] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:19.305 [2024-11-26 07:36:03.432122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:19.567 [2024-11-26 07:36:03.456093] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:19.567 nvme0n1 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:19.567 [ 00:26:19.567 { 00:26:19.567 "name": "nvme0n1", 00:26:19.567 "aliases": [ 00:26:19.567 "263b5132-41c1-427b-9c1a-46329aae1d3b" 00:26:19.567 ], 00:26:19.567 "product_name": "NVMe disk", 00:26:19.567 "block_size": 512, 00:26:19.567 "num_blocks": 2097152, 00:26:19.567 "uuid": "263b5132-41c1-427b-9c1a-46329aae1d3b", 00:26:19.567 "numa_id": 0, 00:26:19.567 "assigned_rate_limits": { 00:26:19.567 "rw_ios_per_sec": 0, 00:26:19.567 
"rw_mbytes_per_sec": 0, 00:26:19.567 "r_mbytes_per_sec": 0, 00:26:19.567 "w_mbytes_per_sec": 0 00:26:19.567 }, 00:26:19.567 "claimed": false, 00:26:19.567 "zoned": false, 00:26:19.567 "supported_io_types": { 00:26:19.567 "read": true, 00:26:19.567 "write": true, 00:26:19.567 "unmap": false, 00:26:19.567 "flush": true, 00:26:19.567 "reset": true, 00:26:19.567 "nvme_admin": true, 00:26:19.567 "nvme_io": true, 00:26:19.567 "nvme_io_md": false, 00:26:19.567 "write_zeroes": true, 00:26:19.567 "zcopy": false, 00:26:19.567 "get_zone_info": false, 00:26:19.567 "zone_management": false, 00:26:19.567 "zone_append": false, 00:26:19.567 "compare": true, 00:26:19.567 "compare_and_write": true, 00:26:19.567 "abort": true, 00:26:19.567 "seek_hole": false, 00:26:19.567 "seek_data": false, 00:26:19.567 "copy": true, 00:26:19.567 "nvme_iov_md": false 00:26:19.567 }, 00:26:19.567 "memory_domains": [ 00:26:19.567 { 00:26:19.567 "dma_device_id": "system", 00:26:19.567 "dma_device_type": 1 00:26:19.567 } 00:26:19.567 ], 00:26:19.567 "driver_specific": { 00:26:19.567 "nvme": [ 00:26:19.567 { 00:26:19.567 "trid": { 00:26:19.567 "trtype": "TCP", 00:26:19.567 "adrfam": "IPv4", 00:26:19.567 "traddr": "10.0.0.2", 00:26:19.567 "trsvcid": "4421", 00:26:19.567 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:19.567 }, 00:26:19.567 "ctrlr_data": { 00:26:19.567 "cntlid": 3, 00:26:19.567 "vendor_id": "0x8086", 00:26:19.567 "model_number": "SPDK bdev Controller", 00:26:19.567 "serial_number": "00000000000000000000", 00:26:19.567 "firmware_revision": "25.01", 00:26:19.567 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:19.567 "oacs": { 00:26:19.567 "security": 0, 00:26:19.567 "format": 0, 00:26:19.567 "firmware": 0, 00:26:19.567 "ns_manage": 0 00:26:19.567 }, 00:26:19.567 "multi_ctrlr": true, 00:26:19.567 "ana_reporting": false 00:26:19.567 }, 00:26:19.567 "vs": { 00:26:19.567 "nvme_version": "1.3" 00:26:19.567 }, 00:26:19.567 "ns_data": { 00:26:19.567 "id": 1, 00:26:19.567 "can_share": true 00:26:19.567 } 
00:26:19.567 } 00:26:19.567 ], 00:26:19.567 "mp_policy": "active_passive" 00:26:19.567 } 00:26:19.567 } 00:26:19.567 ] 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.1FnZXjKcNH 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:19.567 rmmod nvme_tcp 00:26:19.567 rmmod nvme_fabrics 00:26:19.567 rmmod nvme_keyring 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:26:19.567 07:36:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2209463 ']' 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2209463 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2209463 ']' 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2209463 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:26:19.567 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:19.568 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2209463 00:26:19.829 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:19.829 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:19.829 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2209463' 00:26:19.829 killing process with pid 2209463 00:26:19.829 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2209463 00:26:19.829 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2209463 00:26:19.829 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:19.829 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:19.829 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:19.829 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:26:19.829 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:26:19.829 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:19.829 
07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:26:19.829 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:19.829 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:19.829 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.829 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:19.829 07:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.377 07:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:22.377 00:26:22.377 real 0m12.514s 00:26:22.377 user 0m4.299s 00:26:22.377 sys 0m6.735s 00:26:22.377 07:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:22.377 07:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:22.377 ************************************ 00:26:22.377 END TEST nvmf_async_init 00:26:22.377 ************************************ 00:26:22.377 07:36:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:22.377 07:36:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:22.377 07:36:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:22.377 07:36:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.377 ************************************ 00:26:22.377 START TEST dma 00:26:22.377 ************************************ 00:26:22.377 07:36:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:26:22.377 * Looking for test storage... 00:26:22.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.377 --rc genhtml_branch_coverage=1 00:26:22.377 --rc genhtml_function_coverage=1 00:26:22.377 --rc genhtml_legend=1 00:26:22.377 --rc geninfo_all_blocks=1 00:26:22.377 --rc geninfo_unexecuted_blocks=1 00:26:22.377 00:26:22.377 ' 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.377 --rc genhtml_branch_coverage=1 00:26:22.377 --rc genhtml_function_coverage=1 
00:26:22.377 --rc genhtml_legend=1 00:26:22.377 --rc geninfo_all_blocks=1 00:26:22.377 --rc geninfo_unexecuted_blocks=1 00:26:22.377 00:26:22.377 ' 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.377 --rc genhtml_branch_coverage=1 00:26:22.377 --rc genhtml_function_coverage=1 00:26:22.377 --rc genhtml_legend=1 00:26:22.377 --rc geninfo_all_blocks=1 00:26:22.377 --rc geninfo_unexecuted_blocks=1 00:26:22.377 00:26:22.377 ' 00:26:22.377 07:36:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.378 --rc genhtml_branch_coverage=1 00:26:22.378 --rc genhtml_function_coverage=1 00:26:22.378 --rc genhtml_legend=1 00:26:22.378 --rc geninfo_all_blocks=1 00:26:22.378 --rc geninfo_unexecuted_blocks=1 00:26:22.378 00:26:22.378 ' 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:26:22.378 
07:36:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:22.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:26:22.378 00:26:22.378 real 0m0.240s 00:26:22.378 user 0m0.140s 00:26:22.378 sys 0m0.116s 00:26:22.378 07:36:06 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:22.378 ************************************ 00:26:22.378 END TEST dma 00:26:22.378 ************************************ 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.378 ************************************ 00:26:22.378 START TEST nvmf_identify 00:26:22.378 ************************************ 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:22.378 * Looking for test storage... 
00:26:22.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:22.378 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:22.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.641 --rc genhtml_branch_coverage=1 00:26:22.641 --rc genhtml_function_coverage=1 00:26:22.641 --rc genhtml_legend=1 00:26:22.641 --rc geninfo_all_blocks=1 00:26:22.641 --rc geninfo_unexecuted_blocks=1 00:26:22.641 00:26:22.641 ' 00:26:22.641 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:26:22.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.641 --rc genhtml_branch_coverage=1 00:26:22.641 --rc genhtml_function_coverage=1 00:26:22.641 --rc genhtml_legend=1 00:26:22.641 --rc geninfo_all_blocks=1 00:26:22.641 --rc geninfo_unexecuted_blocks=1 00:26:22.641 00:26:22.642 ' 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:22.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.642 --rc genhtml_branch_coverage=1 00:26:22.642 --rc genhtml_function_coverage=1 00:26:22.642 --rc genhtml_legend=1 00:26:22.642 --rc geninfo_all_blocks=1 00:26:22.642 --rc geninfo_unexecuted_blocks=1 00:26:22.642 00:26:22.642 ' 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:22.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.642 --rc genhtml_branch_coverage=1 00:26:22.642 --rc genhtml_function_coverage=1 00:26:22.642 --rc genhtml_legend=1 00:26:22.642 --rc geninfo_all_blocks=1 00:26:22.642 --rc geninfo_unexecuted_blocks=1 00:26:22.642 00:26:22.642 ' 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:22.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:26:22.642 07:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:30.791 07:36:14 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:30.791 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:30.791 
07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:30.791 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:30.791 Found net devices under 0000:31:00.0: cvl_0_0 00:26:30.791 07:36:14 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:30.791 Found net devices under 0000:31:00.1: cvl_0_1 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:30.791 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:30.792 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:30.792 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:30.792 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:30.792 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:26:31.053 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:31.053 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:31.053 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:31.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:31.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:26:31.053 00:26:31.053 --- 10.0.0.2 ping statistics --- 00:26:31.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.053 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:31.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:31.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:26:31.054 00:26:31.054 --- 10.0.0.1 ping statistics --- 00:26:31.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.054 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2215069 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2215069 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2215069 ']' 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:31.054 07:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:31.054 [2024-11-26 07:36:15.048985] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:26:31.054 [2024-11-26 07:36:15.049050] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:31.054 [2024-11-26 07:36:15.140524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:31.054 [2024-11-26 07:36:15.183430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:31.054 [2024-11-26 07:36:15.183470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:31.054 [2024-11-26 07:36:15.183479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:31.054 [2024-11-26 07:36:15.183486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:31.054 [2024-11-26 07:36:15.183492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:31.314 [2024-11-26 07:36:15.185194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.315 [2024-11-26 07:36:15.185310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:31.315 [2024-11-26 07:36:15.185470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:31.315 [2024-11-26 07:36:15.185471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.884 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:31.884 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:26:31.884 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:31.884 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.884 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:31.884 [2024-11-26 07:36:15.864201] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:31.884 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.884 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:31.884 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:31.884 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:31.885 Malloc0 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.885 07:36:15 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:31.885 [2024-11-26 07:36:15.975201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:31.885 07:36:15 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.885 07:36:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:31.885 [ 00:26:31.885 { 00:26:31.885 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:31.885 "subtype": "Discovery", 00:26:31.885 "listen_addresses": [ 00:26:31.885 { 00:26:31.885 "trtype": "TCP", 00:26:31.885 "adrfam": "IPv4", 00:26:31.885 "traddr": "10.0.0.2", 00:26:31.885 "trsvcid": "4420" 00:26:31.885 } 00:26:31.885 ], 00:26:31.885 "allow_any_host": true, 00:26:31.885 "hosts": [] 00:26:31.885 }, 00:26:31.885 { 00:26:31.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:31.885 "subtype": "NVMe", 00:26:31.885 "listen_addresses": [ 00:26:31.885 { 00:26:31.885 "trtype": "TCP", 00:26:31.885 "adrfam": "IPv4", 00:26:31.885 "traddr": "10.0.0.2", 00:26:31.885 "trsvcid": "4420" 00:26:31.885 } 00:26:31.885 ], 00:26:31.885 "allow_any_host": true, 00:26:31.885 "hosts": [], 00:26:31.885 "serial_number": "SPDK00000000000001", 00:26:31.885 "model_number": "SPDK bdev Controller", 00:26:31.885 "max_namespaces": 32, 00:26:31.885 "min_cntlid": 1, 00:26:31.885 "max_cntlid": 65519, 00:26:31.885 "namespaces": [ 00:26:31.885 { 00:26:31.885 "nsid": 1, 00:26:31.885 "bdev_name": "Malloc0", 00:26:31.885 "name": "Malloc0", 00:26:31.885 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:31.885 "eui64": "ABCDEF0123456789", 00:26:31.885 "uuid": "73286024-dbad-4358-a508-8bfd1ced173f" 00:26:31.885 } 00:26:31.885 ] 00:26:31.885 } 00:26:31.885 ] 00:26:31.885 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.885 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:32.149 [2024-11-26 07:36:16.038902] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:26:32.149 [2024-11-26 07:36:16.038941] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2215344 ] 00:26:32.149 [2024-11-26 07:36:16.095076] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:26:32.149 [2024-11-26 07:36:16.095134] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:32.149 [2024-11-26 07:36:16.095139] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:32.149 [2024-11-26 07:36:16.095154] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:32.149 [2024-11-26 07:36:16.095165] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:32.149 [2024-11-26 07:36:16.095825] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:26:32.149 [2024-11-26 07:36:16.095858] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xadc550 0 00:26:32.149 [2024-11-26 07:36:16.101878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:32.149 [2024-11-26 07:36:16.101890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:32.149 [2024-11-26 07:36:16.101895] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:32.149 [2024-11-26 07:36:16.101899] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:32.149 [2024-11-26 07:36:16.101930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.149 [2024-11-26 07:36:16.101936] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.149 [2024-11-26 07:36:16.101944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadc550) 00:26:32.149 [2024-11-26 07:36:16.101957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:32.149 [2024-11-26 07:36:16.101974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e100, cid 0, qid 0 00:26:32.149 [2024-11-26 07:36:16.109874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.149 [2024-11-26 07:36:16.109884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.149 [2024-11-26 07:36:16.109887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.149 [2024-11-26 07:36:16.109892] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e100) on tqpair=0xadc550 00:26:32.149 [2024-11-26 07:36:16.109903] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:32.149 [2024-11-26 07:36:16.109911] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:26:32.149 [2024-11-26 07:36:16.109916] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:26:32.149 [2024-11-26 07:36:16.109929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.149 [2024-11-26 07:36:16.109933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.149 [2024-11-26 07:36:16.109937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadc550) 
00:26:32.149 [2024-11-26 07:36:16.109945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.149 [2024-11-26 07:36:16.109958] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e100, cid 0, qid 0 00:26:32.149 [2024-11-26 07:36:16.110151] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.149 [2024-11-26 07:36:16.110158] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.149 [2024-11-26 07:36:16.110161] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.149 [2024-11-26 07:36:16.110165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e100) on tqpair=0xadc550 00:26:32.149 [2024-11-26 07:36:16.110170] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:26:32.150 [2024-11-26 07:36:16.110177] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:26:32.150 [2024-11-26 07:36:16.110184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.110188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.110192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadc550) 00:26:32.150 [2024-11-26 07:36:16.110198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.150 [2024-11-26 07:36:16.110209] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e100, cid 0, qid 0 00:26:32.150 [2024-11-26 07:36:16.110397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.150 [2024-11-26 07:36:16.110404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:26:32.150 [2024-11-26 07:36:16.110407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.110411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e100) on tqpair=0xadc550 00:26:32.150 [2024-11-26 07:36:16.110416] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:26:32.150 [2024-11-26 07:36:16.110424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:26:32.150 [2024-11-26 07:36:16.110430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.110434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.110440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadc550) 00:26:32.150 [2024-11-26 07:36:16.110447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.150 [2024-11-26 07:36:16.110458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e100, cid 0, qid 0 00:26:32.150 [2024-11-26 07:36:16.110627] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.150 [2024-11-26 07:36:16.110633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.150 [2024-11-26 07:36:16.110636] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.110640] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e100) on tqpair=0xadc550 00:26:32.150 [2024-11-26 07:36:16.110645] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:32.150 [2024-11-26 07:36:16.110655] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.110659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.110662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadc550) 00:26:32.150 [2024-11-26 07:36:16.110669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.150 [2024-11-26 07:36:16.110679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e100, cid 0, qid 0 00:26:32.150 [2024-11-26 07:36:16.110910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.150 [2024-11-26 07:36:16.110916] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.150 [2024-11-26 07:36:16.110920] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.110924] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e100) on tqpair=0xadc550 00:26:32.150 [2024-11-26 07:36:16.110928] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:26:32.150 [2024-11-26 07:36:16.110933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:26:32.150 [2024-11-26 07:36:16.110941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:32.150 [2024-11-26 07:36:16.111049] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:26:32.150 [2024-11-26 07:36:16.111054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:26:32.150 [2024-11-26 07:36:16.111062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.111066] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.111070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadc550) 00:26:32.150 [2024-11-26 07:36:16.111077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.150 [2024-11-26 07:36:16.111087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e100, cid 0, qid 0 00:26:32.150 [2024-11-26 07:36:16.111282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.150 [2024-11-26 07:36:16.111288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.150 [2024-11-26 07:36:16.111292] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.111296] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e100) on tqpair=0xadc550 00:26:32.150 [2024-11-26 07:36:16.111300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:32.150 [2024-11-26 07:36:16.111309] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.111316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.111319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadc550) 00:26:32.150 [2024-11-26 07:36:16.111326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.150 [2024-11-26 07:36:16.111336] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e100, cid 0, qid 0 00:26:32.150 [2024-11-26 
07:36:16.111499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.150 [2024-11-26 07:36:16.111506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.150 [2024-11-26 07:36:16.111509] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.111513] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e100) on tqpair=0xadc550 00:26:32.150 [2024-11-26 07:36:16.111518] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:32.150 [2024-11-26 07:36:16.111522] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:26:32.150 [2024-11-26 07:36:16.111530] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:26:32.150 [2024-11-26 07:36:16.111540] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:26:32.150 [2024-11-26 07:36:16.111549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.111552] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadc550) 00:26:32.150 [2024-11-26 07:36:16.111559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.150 [2024-11-26 07:36:16.111569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e100, cid 0, qid 0 00:26:32.150 [2024-11-26 07:36:16.111795] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:32.150 [2024-11-26 07:36:16.111802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:26:32.150 [2024-11-26 07:36:16.111806] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.111810] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xadc550): datao=0, datal=4096, cccid=0 00:26:32.150 [2024-11-26 07:36:16.111815] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3e100) on tqpair(0xadc550): expected_datao=0, payload_size=4096 00:26:32.150 [2024-11-26 07:36:16.111819] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.111835] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.111839] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.152078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.150 [2024-11-26 07:36:16.152087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.150 [2024-11-26 07:36:16.152091] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.152095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e100) on tqpair=0xadc550 00:26:32.150 [2024-11-26 07:36:16.152102] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:26:32.150 [2024-11-26 07:36:16.152107] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:26:32.150 [2024-11-26 07:36:16.152112] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:26:32.150 [2024-11-26 07:36:16.152120] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:26:32.150 [2024-11-26 07:36:16.152127] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:26:32.150 [2024-11-26 07:36:16.152132] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:26:32.150 [2024-11-26 07:36:16.152142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:26:32.150 [2024-11-26 07:36:16.152149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.152153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.152157] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadc550) 00:26:32.150 [2024-11-26 07:36:16.152164] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:32.150 [2024-11-26 07:36:16.152176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e100, cid 0, qid 0 00:26:32.150 [2024-11-26 07:36:16.152348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.150 [2024-11-26 07:36:16.152355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.150 [2024-11-26 07:36:16.152358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.152362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e100) on tqpair=0xadc550 00:26:32.150 [2024-11-26 07:36:16.152369] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.152373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.152377] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadc550) 00:26:32.150 [2024-11-26 07:36:16.152383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.150 [2024-11-26 07:36:16.152389] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.152393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.150 [2024-11-26 07:36:16.152396] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xadc550) 00:26:32.151 [2024-11-26 07:36:16.152402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.151 [2024-11-26 07:36:16.152408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.152412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.152416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xadc550) 00:26:32.151 [2024-11-26 07:36:16.152421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.151 [2024-11-26 07:36:16.152427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.152431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.152435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadc550) 00:26:32.151 [2024-11-26 07:36:16.152440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.151 [2024-11-26 07:36:16.152445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:32.151 [2024-11-26 07:36:16.152453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:26:32.151 [2024-11-26 07:36:16.152459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.152463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xadc550) 00:26:32.151 [2024-11-26 07:36:16.152470] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.151 [2024-11-26 07:36:16.152484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e100, cid 0, qid 0 00:26:32.151 [2024-11-26 07:36:16.152489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e280, cid 1, qid 0 00:26:32.151 [2024-11-26 07:36:16.152494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e400, cid 2, qid 0 00:26:32.151 [2024-11-26 07:36:16.152499] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e580, cid 3, qid 0 00:26:32.151 [2024-11-26 07:36:16.152503] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e700, cid 4, qid 0 00:26:32.151 [2024-11-26 07:36:16.152760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.151 [2024-11-26 07:36:16.152766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.151 [2024-11-26 07:36:16.152770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.152774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e700) on tqpair=0xadc550 00:26:32.151 [2024-11-26 07:36:16.152781] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:26:32.151 [2024-11-26 07:36:16.152786] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:26:32.151 [2024-11-26 07:36:16.152797] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.152801] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xadc550) 00:26:32.151 [2024-11-26 07:36:16.152807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.151 [2024-11-26 07:36:16.152817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e700, cid 4, qid 0 00:26:32.151 [2024-11-26 07:36:16.156871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:32.151 [2024-11-26 07:36:16.156879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:32.151 [2024-11-26 07:36:16.156883] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.156886] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xadc550): datao=0, datal=4096, cccid=4 00:26:32.151 [2024-11-26 07:36:16.156891] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3e700) on tqpair(0xadc550): expected_datao=0, payload_size=4096 00:26:32.151 [2024-11-26 07:36:16.156896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.156902] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.156906] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.156912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.151 [2024-11-26 07:36:16.156918] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.151 [2024-11-26 07:36:16.156921] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.156925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e700) on tqpair=0xadc550 00:26:32.151 [2024-11-26 07:36:16.156936] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:26:32.151 [2024-11-26 07:36:16.156957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.156961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xadc550) 00:26:32.151 [2024-11-26 07:36:16.156968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.151 [2024-11-26 07:36:16.156975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.156979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.156982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xadc550) 00:26:32.151 [2024-11-26 07:36:16.156988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.151 [2024-11-26 07:36:16.157005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e700, cid 4, qid 0 00:26:32.151 [2024-11-26 07:36:16.157011] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e880, cid 5, qid 0 00:26:32.151 [2024-11-26 07:36:16.157229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:32.151 [2024-11-26 07:36:16.157236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:32.151 [2024-11-26 07:36:16.157239] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.157243] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xadc550): datao=0, datal=1024, cccid=4 00:26:32.151 [2024-11-26 07:36:16.157248] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3e700) on tqpair(0xadc550): expected_datao=0, 
payload_size=1024 00:26:32.151 [2024-11-26 07:36:16.157252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.157258] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.157262] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.157268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.151 [2024-11-26 07:36:16.157274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.151 [2024-11-26 07:36:16.157277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.157281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e880) on tqpair=0xadc550 00:26:32.151 [2024-11-26 07:36:16.198065] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.151 [2024-11-26 07:36:16.198075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.151 [2024-11-26 07:36:16.198079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.198082] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e700) on tqpair=0xadc550 00:26:32.151 [2024-11-26 07:36:16.198094] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.198098] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xadc550) 00:26:32.151 [2024-11-26 07:36:16.198105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.151 [2024-11-26 07:36:16.198120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e700, cid 4, qid 0 00:26:32.151 [2024-11-26 07:36:16.198321] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:32.151 [2024-11-26 07:36:16.198328] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:32.151 [2024-11-26 07:36:16.198332] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.198335] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xadc550): datao=0, datal=3072, cccid=4 00:26:32.151 [2024-11-26 07:36:16.198340] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3e700) on tqpair(0xadc550): expected_datao=0, payload_size=3072 00:26:32.151 [2024-11-26 07:36:16.198344] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.198351] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.198355] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.198532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.151 [2024-11-26 07:36:16.198538] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.151 [2024-11-26 07:36:16.198542] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.198545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e700) on tqpair=0xadc550 00:26:32.151 [2024-11-26 07:36:16.198553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.198557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xadc550) 00:26:32.151 [2024-11-26 07:36:16.198564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.151 [2024-11-26 07:36:16.198580] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e700, cid 4, qid 0 00:26:32.151 [2024-11-26 07:36:16.198807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:32.151 [2024-11-26 
07:36:16.198814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:32.151 [2024-11-26 07:36:16.198817] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.198821] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xadc550): datao=0, datal=8, cccid=4 00:26:32.151 [2024-11-26 07:36:16.198825] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3e700) on tqpair(0xadc550): expected_datao=0, payload_size=8 00:26:32.151 [2024-11-26 07:36:16.198829] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.198836] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.198839] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.243870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.151 [2024-11-26 07:36:16.243878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.151 [2024-11-26 07:36:16.243882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.151 [2024-11-26 07:36:16.243886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e700) on tqpair=0xadc550 00:26:32.151 ===================================================== 00:26:32.151 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:32.151 ===================================================== 00:26:32.151 Controller Capabilities/Features 00:26:32.151 ================================ 00:26:32.151 Vendor ID: 0000 00:26:32.151 Subsystem Vendor ID: 0000 00:26:32.152 Serial Number: .................... 00:26:32.152 Model Number: ........................................ 
00:26:32.152 Firmware Version: 25.01 00:26:32.152 Recommended Arb Burst: 0 00:26:32.152 IEEE OUI Identifier: 00 00 00 00:26:32.152 Multi-path I/O 00:26:32.152 May have multiple subsystem ports: No 00:26:32.152 May have multiple controllers: No 00:26:32.152 Associated with SR-IOV VF: No 00:26:32.152 Max Data Transfer Size: 131072 00:26:32.152 Max Number of Namespaces: 0 00:26:32.152 Max Number of I/O Queues: 1024 00:26:32.152 NVMe Specification Version (VS): 1.3 00:26:32.152 NVMe Specification Version (Identify): 1.3 00:26:32.152 Maximum Queue Entries: 128 00:26:32.152 Contiguous Queues Required: Yes 00:26:32.152 Arbitration Mechanisms Supported 00:26:32.152 Weighted Round Robin: Not Supported 00:26:32.152 Vendor Specific: Not Supported 00:26:32.152 Reset Timeout: 15000 ms 00:26:32.152 Doorbell Stride: 4 bytes 00:26:32.152 NVM Subsystem Reset: Not Supported 00:26:32.152 Command Sets Supported 00:26:32.152 NVM Command Set: Supported 00:26:32.152 Boot Partition: Not Supported 00:26:32.152 Memory Page Size Minimum: 4096 bytes 00:26:32.152 Memory Page Size Maximum: 4096 bytes 00:26:32.152 Persistent Memory Region: Not Supported 00:26:32.152 Optional Asynchronous Events Supported 00:26:32.152 Namespace Attribute Notices: Not Supported 00:26:32.152 Firmware Activation Notices: Not Supported 00:26:32.152 ANA Change Notices: Not Supported 00:26:32.152 PLE Aggregate Log Change Notices: Not Supported 00:26:32.152 LBA Status Info Alert Notices: Not Supported 00:26:32.152 EGE Aggregate Log Change Notices: Not Supported 00:26:32.152 Normal NVM Subsystem Shutdown event: Not Supported 00:26:32.152 Zone Descriptor Change Notices: Not Supported 00:26:32.152 Discovery Log Change Notices: Supported 00:26:32.152 Controller Attributes 00:26:32.152 128-bit Host Identifier: Not Supported 00:26:32.152 Non-Operational Permissive Mode: Not Supported 00:26:32.152 NVM Sets: Not Supported 00:26:32.152 Read Recovery Levels: Not Supported 00:26:32.152 Endurance Groups: Not Supported 00:26:32.152 
Predictable Latency Mode: Not Supported 00:26:32.152 Traffic Based Keep ALive: Not Supported 00:26:32.152 Namespace Granularity: Not Supported 00:26:32.152 SQ Associations: Not Supported 00:26:32.152 UUID List: Not Supported 00:26:32.152 Multi-Domain Subsystem: Not Supported 00:26:32.152 Fixed Capacity Management: Not Supported 00:26:32.152 Variable Capacity Management: Not Supported 00:26:32.152 Delete Endurance Group: Not Supported 00:26:32.152 Delete NVM Set: Not Supported 00:26:32.152 Extended LBA Formats Supported: Not Supported 00:26:32.152 Flexible Data Placement Supported: Not Supported 00:26:32.152 00:26:32.152 Controller Memory Buffer Support 00:26:32.152 ================================ 00:26:32.152 Supported: No 00:26:32.152 00:26:32.152 Persistent Memory Region Support 00:26:32.152 ================================ 00:26:32.152 Supported: No 00:26:32.152 00:26:32.152 Admin Command Set Attributes 00:26:32.152 ============================ 00:26:32.152 Security Send/Receive: Not Supported 00:26:32.152 Format NVM: Not Supported 00:26:32.152 Firmware Activate/Download: Not Supported 00:26:32.152 Namespace Management: Not Supported 00:26:32.152 Device Self-Test: Not Supported 00:26:32.152 Directives: Not Supported 00:26:32.152 NVMe-MI: Not Supported 00:26:32.152 Virtualization Management: Not Supported 00:26:32.152 Doorbell Buffer Config: Not Supported 00:26:32.152 Get LBA Status Capability: Not Supported 00:26:32.152 Command & Feature Lockdown Capability: Not Supported 00:26:32.152 Abort Command Limit: 1 00:26:32.152 Async Event Request Limit: 4 00:26:32.152 Number of Firmware Slots: N/A 00:26:32.152 Firmware Slot 1 Read-Only: N/A 00:26:32.152 Firmware Activation Without Reset: N/A 00:26:32.152 Multiple Update Detection Support: N/A 00:26:32.152 Firmware Update Granularity: No Information Provided 00:26:32.152 Per-Namespace SMART Log: No 00:26:32.152 Asymmetric Namespace Access Log Page: Not Supported 00:26:32.152 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:26:32.152 Command Effects Log Page: Not Supported 00:26:32.152 Get Log Page Extended Data: Supported 00:26:32.152 Telemetry Log Pages: Not Supported 00:26:32.152 Persistent Event Log Pages: Not Supported 00:26:32.152 Supported Log Pages Log Page: May Support 00:26:32.152 Commands Supported & Effects Log Page: Not Supported 00:26:32.152 Feature Identifiers & Effects Log Page:May Support 00:26:32.152 NVMe-MI Commands & Effects Log Page: May Support 00:26:32.152 Data Area 4 for Telemetry Log: Not Supported 00:26:32.152 Error Log Page Entries Supported: 128 00:26:32.152 Keep Alive: Not Supported 00:26:32.152 00:26:32.152 NVM Command Set Attributes 00:26:32.152 ========================== 00:26:32.152 Submission Queue Entry Size 00:26:32.152 Max: 1 00:26:32.152 Min: 1 00:26:32.152 Completion Queue Entry Size 00:26:32.152 Max: 1 00:26:32.152 Min: 1 00:26:32.152 Number of Namespaces: 0 00:26:32.152 Compare Command: Not Supported 00:26:32.152 Write Uncorrectable Command: Not Supported 00:26:32.152 Dataset Management Command: Not Supported 00:26:32.152 Write Zeroes Command: Not Supported 00:26:32.152 Set Features Save Field: Not Supported 00:26:32.152 Reservations: Not Supported 00:26:32.152 Timestamp: Not Supported 00:26:32.152 Copy: Not Supported 00:26:32.152 Volatile Write Cache: Not Present 00:26:32.152 Atomic Write Unit (Normal): 1 00:26:32.152 Atomic Write Unit (PFail): 1 00:26:32.152 Atomic Compare & Write Unit: 1 00:26:32.152 Fused Compare & Write: Supported 00:26:32.152 Scatter-Gather List 00:26:32.152 SGL Command Set: Supported 00:26:32.152 SGL Keyed: Supported 00:26:32.152 SGL Bit Bucket Descriptor: Not Supported 00:26:32.152 SGL Metadata Pointer: Not Supported 00:26:32.152 Oversized SGL: Not Supported 00:26:32.152 SGL Metadata Address: Not Supported 00:26:32.152 SGL Offset: Supported 00:26:32.152 Transport SGL Data Block: Not Supported 00:26:32.152 Replay Protected Memory Block: Not Supported 00:26:32.152 00:26:32.152 
Firmware Slot Information 00:26:32.152 ========================= 00:26:32.152 Active slot: 0 00:26:32.152 00:26:32.152 00:26:32.152 Error Log 00:26:32.152 ========= 00:26:32.152 00:26:32.152 Active Namespaces 00:26:32.152 ================= 00:26:32.152 Discovery Log Page 00:26:32.152 ================== 00:26:32.152 Generation Counter: 2 00:26:32.152 Number of Records: 2 00:26:32.152 Record Format: 0 00:26:32.152 00:26:32.152 Discovery Log Entry 0 00:26:32.152 ---------------------- 00:26:32.152 Transport Type: 3 (TCP) 00:26:32.152 Address Family: 1 (IPv4) 00:26:32.152 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:32.152 Entry Flags: 00:26:32.152 Duplicate Returned Information: 1 00:26:32.152 Explicit Persistent Connection Support for Discovery: 1 00:26:32.152 Transport Requirements: 00:26:32.152 Secure Channel: Not Required 00:26:32.152 Port ID: 0 (0x0000) 00:26:32.152 Controller ID: 65535 (0xffff) 00:26:32.152 Admin Max SQ Size: 128 00:26:32.152 Transport Service Identifier: 4420 00:26:32.152 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:32.152 Transport Address: 10.0.0.2 00:26:32.152 Discovery Log Entry 1 00:26:32.152 ---------------------- 00:26:32.152 Transport Type: 3 (TCP) 00:26:32.152 Address Family: 1 (IPv4) 00:26:32.152 Subsystem Type: 2 (NVM Subsystem) 00:26:32.152 Entry Flags: 00:26:32.152 Duplicate Returned Information: 0 00:26:32.152 Explicit Persistent Connection Support for Discovery: 0 00:26:32.152 Transport Requirements: 00:26:32.152 Secure Channel: Not Required 00:26:32.152 Port ID: 0 (0x0000) 00:26:32.152 Controller ID: 65535 (0xffff) 00:26:32.152 Admin Max SQ Size: 128 00:26:32.152 Transport Service Identifier: 4420 00:26:32.152 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:32.152 Transport Address: 10.0.0.2 [2024-11-26 07:36:16.243971] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:26:32.152 [2024-11-26 
07:36:16.243983] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e100) on tqpair=0xadc550 00:26:32.152 [2024-11-26 07:36:16.243989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.152 [2024-11-26 07:36:16.243994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e280) on tqpair=0xadc550 00:26:32.152 [2024-11-26 07:36:16.243999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.152 [2024-11-26 07:36:16.244004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e400) on tqpair=0xadc550 00:26:32.152 [2024-11-26 07:36:16.244008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.153 [2024-11-26 07:36:16.244013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e580) on tqpair=0xadc550 00:26:32.153 [2024-11-26 07:36:16.244018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.153 [2024-11-26 07:36:16.244028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.244032] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.244036] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadc550) 00:26:32.153 [2024-11-26 07:36:16.244043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.153 [2024-11-26 07:36:16.244056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e580, cid 3, qid 0 00:26:32.153 [2024-11-26 07:36:16.244303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.153 [2024-11-26 
07:36:16.244309] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.153 [2024-11-26 07:36:16.244313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.244317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e580) on tqpair=0xadc550 00:26:32.153 [2024-11-26 07:36:16.244323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.244327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.244331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadc550) 00:26:32.153 [2024-11-26 07:36:16.244339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.153 [2024-11-26 07:36:16.244352] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e580, cid 3, qid 0 00:26:32.153 [2024-11-26 07:36:16.244537] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.153 [2024-11-26 07:36:16.244543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.153 [2024-11-26 07:36:16.244547] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.244550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e580) on tqpair=0xadc550 00:26:32.153 [2024-11-26 07:36:16.244555] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:26:32.153 [2024-11-26 07:36:16.244560] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:26:32.153 [2024-11-26 07:36:16.244569] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.244573] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.153 
[2024-11-26 07:36:16.244577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadc550) 00:26:32.153 [2024-11-26 07:36:16.244584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.153 [2024-11-26 07:36:16.244594] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e580, cid 3, qid 0 00:26:32.153 [2024-11-26 07:36:16.244749] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.153 [2024-11-26 07:36:16.244755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.153 [2024-11-26 07:36:16.244759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.244763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e580) on tqpair=0xadc550 00:26:32.153 [2024-11-26 07:36:16.244772] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.244777] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.244780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadc550) 00:26:32.153 [2024-11-26 07:36:16.244787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.153 [2024-11-26 07:36:16.244797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e580, cid 3, qid 0 00:26:32.153 [2024-11-26 07:36:16.244990] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.153 [2024-11-26 07:36:16.244997] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.153 [2024-11-26 07:36:16.245001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.245004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e580) on tqpair=0xadc550 
00:26:32.153 [2024-11-26 07:36:16.245014] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.245018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.245021] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadc550) 00:26:32.153 [2024-11-26 07:36:16.245028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.153 [2024-11-26 07:36:16.245039] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e580, cid 3, qid 0 00:26:32.153 [2024-11-26 07:36:16.245210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.153 [2024-11-26 07:36:16.245216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.153 [2024-11-26 07:36:16.245220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.245224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e580) on tqpair=0xadc550 00:26:32.153 [2024-11-26 07:36:16.245233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.245237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.245243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadc550) 00:26:32.153 [2024-11-26 07:36:16.245250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.153 [2024-11-26 07:36:16.245260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e580, cid 3, qid 0 00:26:32.153 [2024-11-26 07:36:16.245440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.153 [2024-11-26 07:36:16.245447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.153 
[2024-11-26 07:36:16.245450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.245454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e580) on tqpair=0xadc550 00:26:32.153 [2024-11-26 07:36:16.245464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.245468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.245471] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadc550) 00:26:32.153 [2024-11-26 07:36:16.245478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.153 [2024-11-26 07:36:16.245488] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e580, cid 3, qid 0 00:26:32.153 [2024-11-26 07:36:16.245662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.153 [2024-11-26 07:36:16.245669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.153 [2024-11-26 07:36:16.245672] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.245676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e580) on tqpair=0xadc550 00:26:32.153 [2024-11-26 07:36:16.245686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.245690] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.245693] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadc550) 00:26:32.153 [2024-11-26 07:36:16.245700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.153 [2024-11-26 07:36:16.245710] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e580, cid 3, qid 0 
00:26:32.153 [2024-11-26 07:36:16.245895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.153 [2024-11-26 07:36:16.245901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.153 [2024-11-26 07:36:16.245905] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.245909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e580) on tqpair=0xadc550 00:26:32.153 [2024-11-26 07:36:16.245918] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.245922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.245926] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadc550) 00:26:32.153 [2024-11-26 07:36:16.245933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.153 [2024-11-26 07:36:16.245943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e580, cid 3, qid 0 00:26:32.153 [2024-11-26 07:36:16.246107] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.153 [2024-11-26 07:36:16.246114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.153 [2024-11-26 07:36:16.246117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.246121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e580) on tqpair=0xadc550 00:26:32.153 [2024-11-26 07:36:16.246131] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.246135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.246138] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadc550) 00:26:32.153 [2024-11-26 07:36:16.246147] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.153 [2024-11-26 07:36:16.246157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e580, cid 3, qid 0 00:26:32.153 [2024-11-26 07:36:16.246362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.153 [2024-11-26 07:36:16.246369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.153 [2024-11-26 07:36:16.246372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.246376] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e580) on tqpair=0xadc550 00:26:32.153 [2024-11-26 07:36:16.246386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.246390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.246393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadc550) 00:26:32.153 [2024-11-26 07:36:16.246400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.153 [2024-11-26 07:36:16.246410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e580, cid 3, qid 0 00:26:32.153 [2024-11-26 07:36:16.246628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.153 [2024-11-26 07:36:16.246634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.153 [2024-11-26 07:36:16.246637] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.246641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e580) on tqpair=0xadc550 00:26:32.153 [2024-11-26 07:36:16.246651] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.246655] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.153 [2024-11-26 07:36:16.246659] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadc550) 00:26:32.154 [2024-11-26 07:36:16.246665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.154 [2024-11-26 07:36:16.246675] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e580, cid 3, qid 0 00:26:32.154 [2024-11-26 07:36:16.246853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.154 [2024-11-26 07:36:16.246859] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.154 [2024-11-26 07:36:16.246867] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.154 [2024-11-26 07:36:16.246871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e580) on tqpair=0xadc550 00:26:32.154 [2024-11-26 07:36:16.246880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.154 [2024-11-26 07:36:16.246884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.154 [2024-11-26 07:36:16.246888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadc550) 00:26:32.154 [2024-11-26 07:36:16.246894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.154 [2024-11-26 07:36:16.246905] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e580, cid 3, qid 0 00:26:32.154 [2024-11-26 07:36:16.247111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.154 [2024-11-26 07:36:16.247117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.154 [2024-11-26 07:36:16.247121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.154 [2024-11-26 07:36:16.247125] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e580) on tqpair=0xadc550 00:26:32.154 [2024-11-26 07:36:16.247134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.154 [2024-11-26 07:36:16.247138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.154 [2024-11-26 07:36:16.247142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadc550) 00:26:32.154 [2024-11-26 07:36:16.247149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.154 [2024-11-26 07:36:16.247161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e580, cid 3, qid 0 00:26:32.154 [2024-11-26 07:36:16.247342] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.154 [2024-11-26 07:36:16.247348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.154 [2024-11-26 07:36:16.247351] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.154 [2024-11-26 07:36:16.247355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e580) on tqpair=0xadc550 00:26:32.154 [2024-11-26 07:36:16.247365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.154 [2024-11-26 07:36:16.247369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.154 [2024-11-26 07:36:16.247373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadc550) 00:26:32.154 [2024-11-26 07:36:16.247379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.154 [2024-11-26 07:36:16.247389] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e580, cid 3, qid 0 00:26:32.154 [2024-11-26 07:36:16.247589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.154 [2024-11-26 
07:36:16.247595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.154 [2024-11-26 07:36:16.247599] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.154 [2024-11-26 07:36:16.247602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e580) on tqpair=0xadc550 00:26:32.154 [2024-11-26 07:36:16.247612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.154 [2024-11-26 07:36:16.247616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.154 [2024-11-26 07:36:16.247619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadc550) 00:26:32.154 [2024-11-26 07:36:16.247626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.154 [2024-11-26 07:36:16.247636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e580, cid 3, qid 0 00:26:32.154 [2024-11-26 07:36:16.247807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.154 [2024-11-26 07:36:16.247814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.154 [2024-11-26 07:36:16.247817] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.154 [2024-11-26 07:36:16.247821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e580) on tqpair=0xadc550 00:26:32.154 [2024-11-26 07:36:16.247830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.154 [2024-11-26 07:36:16.247834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.154 [2024-11-26 07:36:16.247838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadc550) 00:26:32.154 [2024-11-26 07:36:16.247845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.154 [2024-11-26 
07:36:16.247855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3e580, cid 3, qid 0 00:26:32.154 [2024-11-26 07:36:16.251870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.154 [2024-11-26 07:36:16.251878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.154 [2024-11-26 07:36:16.251881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.154 [2024-11-26 07:36:16.251885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb3e580) on tqpair=0xadc550 00:26:32.154 [2024-11-26 07:36:16.251893] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:26:32.154 00:26:32.154 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:32.419 [2024-11-26 07:36:16.292660] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:26:32.419 [2024-11-26 07:36:16.292728] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2215353 ] 00:26:32.419 [2024-11-26 07:36:16.345278] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:26:32.419 [2024-11-26 07:36:16.345326] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:32.419 [2024-11-26 07:36:16.345331] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:32.419 [2024-11-26 07:36:16.345345] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:32.419 [2024-11-26 07:36:16.345355] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:32.419 [2024-11-26 07:36:16.349066] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:26:32.419 [2024-11-26 07:36:16.349093] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x94e550 0 00:26:32.419 [2024-11-26 07:36:16.356875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:32.419 [2024-11-26 07:36:16.356887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:32.419 [2024-11-26 07:36:16.356892] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:32.419 [2024-11-26 07:36:16.356895] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:32.419 [2024-11-26 07:36:16.356923] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.419 [2024-11-26 07:36:16.356928] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.419 [2024-11-26 07:36:16.356933] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x94e550) 00:26:32.419 [2024-11-26 07:36:16.356944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:32.419 [2024-11-26 07:36:16.356961] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0100, cid 0, qid 0 00:26:32.419 [2024-11-26 07:36:16.364871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.419 [2024-11-26 07:36:16.364880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.419 [2024-11-26 07:36:16.364884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.419 [2024-11-26 07:36:16.364889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0100) on tqpair=0x94e550 00:26:32.419 [2024-11-26 07:36:16.364897] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:32.419 [2024-11-26 07:36:16.364904] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:26:32.419 [2024-11-26 07:36:16.364909] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:26:32.419 [2024-11-26 07:36:16.364921] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.419 [2024-11-26 07:36:16.364926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.419 [2024-11-26 07:36:16.364929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x94e550) 00:26:32.419 [2024-11-26 07:36:16.364937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.419 [2024-11-26 07:36:16.364951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0100, cid 0, qid 0 00:26:32.419 [2024-11-26 07:36:16.365133] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.419 [2024-11-26 07:36:16.365140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.419 [2024-11-26 07:36:16.365143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.419 [2024-11-26 07:36:16.365150] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0100) on tqpair=0x94e550 00:26:32.419 [2024-11-26 07:36:16.365156] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:26:32.419 [2024-11-26 07:36:16.365164] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:26:32.419 [2024-11-26 07:36:16.365171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.419 [2024-11-26 07:36:16.365174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.419 [2024-11-26 07:36:16.365178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x94e550) 00:26:32.419 [2024-11-26 07:36:16.365185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.419 [2024-11-26 07:36:16.365196] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0100, cid 0, qid 0 00:26:32.419 [2024-11-26 07:36:16.365387] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.419 [2024-11-26 07:36:16.365393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.419 [2024-11-26 07:36:16.365397] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.419 [2024-11-26 07:36:16.365401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0100) on tqpair=0x94e550 00:26:32.419 [2024-11-26 07:36:16.365406] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to check en (no timeout) 00:26:32.419 [2024-11-26 07:36:16.365413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:26:32.419 [2024-11-26 07:36:16.365420] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.419 [2024-11-26 07:36:16.365424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.365427] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x94e550) 00:26:32.420 [2024-11-26 07:36:16.365434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.420 [2024-11-26 07:36:16.365444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0100, cid 0, qid 0 00:26:32.420 [2024-11-26 07:36:16.365652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.420 [2024-11-26 07:36:16.365658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.420 [2024-11-26 07:36:16.365662] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.365666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0100) on tqpair=0x94e550 00:26:32.420 [2024-11-26 07:36:16.365670] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:32.420 [2024-11-26 07:36:16.365680] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.365684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.365687] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x94e550) 00:26:32.420 [2024-11-26 07:36:16.365694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.420 [2024-11-26 07:36:16.365704] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0100, cid 0, qid 0 00:26:32.420 [2024-11-26 07:36:16.365918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.420 [2024-11-26 07:36:16.365925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.420 [2024-11-26 07:36:16.365929] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.365933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0100) on tqpair=0x94e550 00:26:32.420 [2024-11-26 07:36:16.365937] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:26:32.420 [2024-11-26 07:36:16.365944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:26:32.420 [2024-11-26 07:36:16.365952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:32.420 [2024-11-26 07:36:16.366060] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:26:32.420 [2024-11-26 07:36:16.366064] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:32.420 [2024-11-26 07:36:16.366072] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.366076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.366079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x94e550) 00:26:32.420 [2024-11-26 07:36:16.366086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.420 [2024-11-26 07:36:16.366097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0100, cid 0, qid 0 00:26:32.420 [2024-11-26 07:36:16.366275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.420 [2024-11-26 07:36:16.366282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.420 [2024-11-26 07:36:16.366285] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.366289] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0100) on tqpair=0x94e550 00:26:32.420 [2024-11-26 07:36:16.366294] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:32.420 [2024-11-26 07:36:16.366303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.366307] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.366311] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x94e550) 00:26:32.420 [2024-11-26 07:36:16.366318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.420 [2024-11-26 07:36:16.366328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0100, cid 0, qid 0 00:26:32.420 [2024-11-26 07:36:16.366537] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.420 [2024-11-26 07:36:16.366544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.420 [2024-11-26 07:36:16.366547] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.366551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0100) on tqpair=0x94e550 00:26:32.420 [2024-11-26 07:36:16.366556] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:32.420 [2024-11-26 07:36:16.366560] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:26:32.420 [2024-11-26 07:36:16.366568] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:26:32.420 [2024-11-26 07:36:16.366576] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:26:32.420 [2024-11-26 07:36:16.366584] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.366588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x94e550) 00:26:32.420 [2024-11-26 07:36:16.366595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.420 [2024-11-26 07:36:16.366606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0100, cid 0, qid 0 00:26:32.420 [2024-11-26 07:36:16.366811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:32.420 [2024-11-26 07:36:16.366822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:32.420 [2024-11-26 07:36:16.366826] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.366830] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x94e550): datao=0, datal=4096, cccid=0 00:26:32.420 [2024-11-26 07:36:16.366835] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9b0100) on tqpair(0x94e550): expected_datao=0, payload_size=4096 00:26:32.420 [2024-11-26 07:36:16.366839] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.366855] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.366859] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.408036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.420 [2024-11-26 07:36:16.408045] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.420 [2024-11-26 07:36:16.408049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.408053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0100) on tqpair=0x94e550 00:26:32.420 [2024-11-26 07:36:16.408061] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:26:32.420 [2024-11-26 07:36:16.408066] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:26:32.420 [2024-11-26 07:36:16.408070] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:26:32.420 [2024-11-26 07:36:16.408078] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:26:32.420 [2024-11-26 07:36:16.408082] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:26:32.420 [2024-11-26 07:36:16.408087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:26:32.420 [2024-11-26 07:36:16.408097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:26:32.420 [2024-11-26 07:36:16.408104] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.408108] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.408112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x94e550) 00:26:32.420 [2024-11-26 07:36:16.408119] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:32.420 [2024-11-26 07:36:16.408131] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0100, cid 0, qid 0 00:26:32.420 [2024-11-26 07:36:16.408315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.420 [2024-11-26 07:36:16.408322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.420 [2024-11-26 07:36:16.408325] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.408329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0100) on tqpair=0x94e550 00:26:32.420 [2024-11-26 07:36:16.408336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.408340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.408343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x94e550) 00:26:32.420 [2024-11-26 07:36:16.408349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.420 [2024-11-26 07:36:16.408356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.408359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.408363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x94e550) 00:26:32.420 [2024-11-26 07:36:16.408369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:26:32.420 [2024-11-26 07:36:16.408378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.408382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.408385] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x94e550) 00:26:32.420 [2024-11-26 07:36:16.408391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.420 [2024-11-26 07:36:16.408397] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.408401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.408405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x94e550) 00:26:32.420 [2024-11-26 07:36:16.408411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.420 [2024-11-26 07:36:16.408415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:32.420 [2024-11-26 07:36:16.408424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:32.420 [2024-11-26 07:36:16.408430] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.420 [2024-11-26 07:36:16.408434] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x94e550) 00:26:32.421 [2024-11-26 07:36:16.408441] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.421 [2024-11-26 07:36:16.408453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x9b0100, cid 0, qid 0 00:26:32.421 [2024-11-26 07:36:16.408458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0280, cid 1, qid 0 00:26:32.421 [2024-11-26 07:36:16.408463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0400, cid 2, qid 0 00:26:32.421 [2024-11-26 07:36:16.408468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0580, cid 3, qid 0 00:26:32.421 [2024-11-26 07:36:16.408473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0700, cid 4, qid 0 00:26:32.421 [2024-11-26 07:36:16.408669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.421 [2024-11-26 07:36:16.408675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.421 [2024-11-26 07:36:16.408679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.408683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0700) on tqpair=0x94e550 00:26:32.421 [2024-11-26 07:36:16.408689] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:26:32.421 [2024-11-26 07:36:16.408695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:32.421 [2024-11-26 07:36:16.408703] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:26:32.421 [2024-11-26 07:36:16.408709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:32.421 [2024-11-26 07:36:16.408715] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.408719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.421 [2024-11-26 
07:36:16.408723] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x94e550) 00:26:32.421 [2024-11-26 07:36:16.408729] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:32.421 [2024-11-26 07:36:16.408739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0700, cid 4, qid 0 00:26:32.421 [2024-11-26 07:36:16.412869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.421 [2024-11-26 07:36:16.412877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.421 [2024-11-26 07:36:16.412881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.412885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0700) on tqpair=0x94e550 00:26:32.421 [2024-11-26 07:36:16.412950] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:26:32.421 [2024-11-26 07:36:16.412960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:32.421 [2024-11-26 07:36:16.412967] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.412971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x94e550) 00:26:32.421 [2024-11-26 07:36:16.412977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.421 [2024-11-26 07:36:16.412989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0700, cid 4, qid 0 00:26:32.421 [2024-11-26 07:36:16.413160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:32.421 [2024-11-26 07:36:16.413167] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:32.421 [2024-11-26 07:36:16.413170] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.413174] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x94e550): datao=0, datal=4096, cccid=4 00:26:32.421 [2024-11-26 07:36:16.413179] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9b0700) on tqpair(0x94e550): expected_datao=0, payload_size=4096 00:26:32.421 [2024-11-26 07:36:16.413183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.413190] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.413194] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.413385] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.421 [2024-11-26 07:36:16.413391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.421 [2024-11-26 07:36:16.413395] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.413398] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0700) on tqpair=0x94e550 00:26:32.421 [2024-11-26 07:36:16.413407] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:26:32.421 [2024-11-26 07:36:16.413420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:26:32.421 [2024-11-26 07:36:16.413429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:26:32.421 [2024-11-26 07:36:16.413436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.413440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 
on tqpair(0x94e550) 00:26:32.421 [2024-11-26 07:36:16.413447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.421 [2024-11-26 07:36:16.413457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0700, cid 4, qid 0 00:26:32.421 [2024-11-26 07:36:16.413679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:32.421 [2024-11-26 07:36:16.413685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:32.421 [2024-11-26 07:36:16.413689] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.413692] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x94e550): datao=0, datal=4096, cccid=4 00:26:32.421 [2024-11-26 07:36:16.413697] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9b0700) on tqpair(0x94e550): expected_datao=0, payload_size=4096 00:26:32.421 [2024-11-26 07:36:16.413704] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.413718] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.413722] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.413872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.421 [2024-11-26 07:36:16.413879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.421 [2024-11-26 07:36:16.413882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.413886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0700) on tqpair=0x94e550 00:26:32.421 [2024-11-26 07:36:16.413897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:32.421 [2024-11-26 
07:36:16.413907] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:32.421 [2024-11-26 07:36:16.413914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.413918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x94e550) 00:26:32.421 [2024-11-26 07:36:16.413925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.421 [2024-11-26 07:36:16.413936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0700, cid 4, qid 0 00:26:32.421 [2024-11-26 07:36:16.414166] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:32.421 [2024-11-26 07:36:16.414173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:32.421 [2024-11-26 07:36:16.414176] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.414180] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x94e550): datao=0, datal=4096, cccid=4 00:26:32.421 [2024-11-26 07:36:16.414184] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9b0700) on tqpair(0x94e550): expected_datao=0, payload_size=4096 00:26:32.421 [2024-11-26 07:36:16.414189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.414195] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.414199] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.414364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.421 [2024-11-26 07:36:16.414370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.421 [2024-11-26 07:36:16.414373] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.414377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0700) on tqpair=0x94e550 00:26:32.421 [2024-11-26 07:36:16.414384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:32.421 [2024-11-26 07:36:16.414392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:26:32.421 [2024-11-26 07:36:16.414400] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:26:32.421 [2024-11-26 07:36:16.414407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:26:32.421 [2024-11-26 07:36:16.414412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:32.421 [2024-11-26 07:36:16.414417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:26:32.421 [2024-11-26 07:36:16.414423] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:26:32.421 [2024-11-26 07:36:16.414429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:26:32.421 [2024-11-26 07:36:16.414435] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:26:32.421 [2024-11-26 07:36:16.414448] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.414452] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x94e550) 00:26:32.421 [2024-11-26 07:36:16.414459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.421 [2024-11-26 07:36:16.414465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.414469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.421 [2024-11-26 07:36:16.414473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x94e550) 00:26:32.421 [2024-11-26 07:36:16.414479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.421 [2024-11-26 07:36:16.414492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0700, cid 4, qid 0 00:26:32.422 [2024-11-26 07:36:16.414498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0880, cid 5, qid 0 00:26:32.422 [2024-11-26 07:36:16.414668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.422 [2024-11-26 07:36:16.414675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.422 [2024-11-26 07:36:16.414678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.414682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0700) on tqpair=0x94e550 00:26:32.422 [2024-11-26 07:36:16.414689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.422 [2024-11-26 07:36:16.414695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.422 [2024-11-26 07:36:16.414698] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.414702] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0880) on tqpair=0x94e550 00:26:32.422 [2024-11-26 
07:36:16.414711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.414715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x94e550) 00:26:32.422 [2024-11-26 07:36:16.414721] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.422 [2024-11-26 07:36:16.414731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0880, cid 5, qid 0 00:26:32.422 [2024-11-26 07:36:16.414917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.422 [2024-11-26 07:36:16.414924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.422 [2024-11-26 07:36:16.414927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.414931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0880) on tqpair=0x94e550 00:26:32.422 [2024-11-26 07:36:16.414940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.414944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x94e550) 00:26:32.422 [2024-11-26 07:36:16.414951] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.422 [2024-11-26 07:36:16.414961] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0880, cid 5, qid 0 00:26:32.422 [2024-11-26 07:36:16.415189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.422 [2024-11-26 07:36:16.415196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.422 [2024-11-26 07:36:16.415199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.415203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x9b0880) on tqpair=0x94e550 00:26:32.422 [2024-11-26 07:36:16.415212] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.415218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x94e550) 00:26:32.422 [2024-11-26 07:36:16.415224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.422 [2024-11-26 07:36:16.415234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0880, cid 5, qid 0 00:26:32.422 [2024-11-26 07:36:16.415437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.422 [2024-11-26 07:36:16.415443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.422 [2024-11-26 07:36:16.415447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.415451] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0880) on tqpair=0x94e550 00:26:32.422 [2024-11-26 07:36:16.415464] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.415468] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x94e550) 00:26:32.422 [2024-11-26 07:36:16.415475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.422 [2024-11-26 07:36:16.415483] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.415486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x94e550) 00:26:32.422 [2024-11-26 07:36:16.415493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.422 
[2024-11-26 07:36:16.415500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.415503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x94e550) 00:26:32.422 [2024-11-26 07:36:16.415510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.422 [2024-11-26 07:36:16.415517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.415521] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x94e550) 00:26:32.422 [2024-11-26 07:36:16.415527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.422 [2024-11-26 07:36:16.415538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0880, cid 5, qid 0 00:26:32.422 [2024-11-26 07:36:16.415543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0700, cid 4, qid 0 00:26:32.422 [2024-11-26 07:36:16.415548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0a00, cid 6, qid 0 00:26:32.422 [2024-11-26 07:36:16.415553] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0b80, cid 7, qid 0 00:26:32.422 [2024-11-26 07:36:16.415783] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:32.422 [2024-11-26 07:36:16.415789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:32.422 [2024-11-26 07:36:16.415793] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.415797] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x94e550): datao=0, datal=8192, cccid=5 00:26:32.422 [2024-11-26 07:36:16.415801] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x9b0880) on tqpair(0x94e550): expected_datao=0, payload_size=8192 00:26:32.422 [2024-11-26 07:36:16.415805] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.415912] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.415917] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.415923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:32.422 [2024-11-26 07:36:16.415928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:32.422 [2024-11-26 07:36:16.415936] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.415939] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x94e550): datao=0, datal=512, cccid=4 00:26:32.422 [2024-11-26 07:36:16.415944] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9b0700) on tqpair(0x94e550): expected_datao=0, payload_size=512 00:26:32.422 [2024-11-26 07:36:16.415948] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.415955] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.415959] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.415964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:32.422 [2024-11-26 07:36:16.415970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:32.422 [2024-11-26 07:36:16.415973] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.415977] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x94e550): datao=0, datal=512, cccid=6 00:26:32.422 [2024-11-26 07:36:16.415982] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9b0a00) on tqpair(0x94e550): expected_datao=0, 
payload_size=512 00:26:32.422 [2024-11-26 07:36:16.415986] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.415993] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.415996] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.416002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:32.422 [2024-11-26 07:36:16.416008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:32.422 [2024-11-26 07:36:16.416011] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.416015] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x94e550): datao=0, datal=4096, cccid=7 00:26:32.422 [2024-11-26 07:36:16.416019] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9b0b80) on tqpair(0x94e550): expected_datao=0, payload_size=4096 00:26:32.422 [2024-11-26 07:36:16.416023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.416035] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.416038] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.416049] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.422 [2024-11-26 07:36:16.416055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.422 [2024-11-26 07:36:16.416059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.416063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0880) on tqpair=0x94e550 00:26:32.422 [2024-11-26 07:36:16.416075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.422 [2024-11-26 07:36:16.416081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.422 [2024-11-26 
07:36:16.416084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.416088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0700) on tqpair=0x94e550 00:26:32.422 [2024-11-26 07:36:16.416098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.422 [2024-11-26 07:36:16.416104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.422 [2024-11-26 07:36:16.416108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.416111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0a00) on tqpair=0x94e550 00:26:32.422 [2024-11-26 07:36:16.416119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.422 [2024-11-26 07:36:16.416125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.422 [2024-11-26 07:36:16.416128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.422 [2024-11-26 07:36:16.416132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0b80) on tqpair=0x94e550 00:26:32.422 ===================================================== 00:26:32.422 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:32.422 ===================================================== 00:26:32.422 Controller Capabilities/Features 00:26:32.422 ================================ 00:26:32.422 Vendor ID: 8086 00:26:32.422 Subsystem Vendor ID: 8086 00:26:32.422 Serial Number: SPDK00000000000001 00:26:32.422 Model Number: SPDK bdev Controller 00:26:32.422 Firmware Version: 25.01 00:26:32.422 Recommended Arb Burst: 6 00:26:32.422 IEEE OUI Identifier: e4 d2 5c 00:26:32.422 Multi-path I/O 00:26:32.422 May have multiple subsystem ports: Yes 00:26:32.422 May have multiple controllers: Yes 00:26:32.422 Associated with SR-IOV VF: No 00:26:32.422 Max Data Transfer Size: 131072 00:26:32.423 Max Number of Namespaces: 32 00:26:32.423 
Max Number of I/O Queues: 127 00:26:32.423 NVMe Specification Version (VS): 1.3 00:26:32.423 NVMe Specification Version (Identify): 1.3 00:26:32.423 Maximum Queue Entries: 128 00:26:32.423 Contiguous Queues Required: Yes 00:26:32.423 Arbitration Mechanisms Supported 00:26:32.423 Weighted Round Robin: Not Supported 00:26:32.423 Vendor Specific: Not Supported 00:26:32.423 Reset Timeout: 15000 ms 00:26:32.423 Doorbell Stride: 4 bytes 00:26:32.423 NVM Subsystem Reset: Not Supported 00:26:32.423 Command Sets Supported 00:26:32.423 NVM Command Set: Supported 00:26:32.423 Boot Partition: Not Supported 00:26:32.423 Memory Page Size Minimum: 4096 bytes 00:26:32.423 Memory Page Size Maximum: 4096 bytes 00:26:32.423 Persistent Memory Region: Not Supported 00:26:32.423 Optional Asynchronous Events Supported 00:26:32.423 Namespace Attribute Notices: Supported 00:26:32.423 Firmware Activation Notices: Not Supported 00:26:32.423 ANA Change Notices: Not Supported 00:26:32.423 PLE Aggregate Log Change Notices: Not Supported 00:26:32.423 LBA Status Info Alert Notices: Not Supported 00:26:32.423 EGE Aggregate Log Change Notices: Not Supported 00:26:32.423 Normal NVM Subsystem Shutdown event: Not Supported 00:26:32.423 Zone Descriptor Change Notices: Not Supported 00:26:32.423 Discovery Log Change Notices: Not Supported 00:26:32.423 Controller Attributes 00:26:32.423 128-bit Host Identifier: Supported 00:26:32.423 Non-Operational Permissive Mode: Not Supported 00:26:32.423 NVM Sets: Not Supported 00:26:32.423 Read Recovery Levels: Not Supported 00:26:32.423 Endurance Groups: Not Supported 00:26:32.423 Predictable Latency Mode: Not Supported 00:26:32.423 Traffic Based Keep ALive: Not Supported 00:26:32.423 Namespace Granularity: Not Supported 00:26:32.423 SQ Associations: Not Supported 00:26:32.423 UUID List: Not Supported 00:26:32.423 Multi-Domain Subsystem: Not Supported 00:26:32.423 Fixed Capacity Management: Not Supported 00:26:32.423 Variable Capacity Management: Not Supported 
00:26:32.423 Delete Endurance Group: Not Supported 00:26:32.423 Delete NVM Set: Not Supported 00:26:32.423 Extended LBA Formats Supported: Not Supported 00:26:32.423 Flexible Data Placement Supported: Not Supported 00:26:32.423 00:26:32.423 Controller Memory Buffer Support 00:26:32.423 ================================ 00:26:32.423 Supported: No 00:26:32.423 00:26:32.423 Persistent Memory Region Support 00:26:32.423 ================================ 00:26:32.423 Supported: No 00:26:32.423 00:26:32.423 Admin Command Set Attributes 00:26:32.423 ============================ 00:26:32.423 Security Send/Receive: Not Supported 00:26:32.423 Format NVM: Not Supported 00:26:32.423 Firmware Activate/Download: Not Supported 00:26:32.423 Namespace Management: Not Supported 00:26:32.423 Device Self-Test: Not Supported 00:26:32.423 Directives: Not Supported 00:26:32.423 NVMe-MI: Not Supported 00:26:32.423 Virtualization Management: Not Supported 00:26:32.423 Doorbell Buffer Config: Not Supported 00:26:32.423 Get LBA Status Capability: Not Supported 00:26:32.423 Command & Feature Lockdown Capability: Not Supported 00:26:32.423 Abort Command Limit: 4 00:26:32.423 Async Event Request Limit: 4 00:26:32.423 Number of Firmware Slots: N/A 00:26:32.423 Firmware Slot 1 Read-Only: N/A 00:26:32.423 Firmware Activation Without Reset: N/A 00:26:32.423 Multiple Update Detection Support: N/A 00:26:32.423 Firmware Update Granularity: No Information Provided 00:26:32.423 Per-Namespace SMART Log: No 00:26:32.423 Asymmetric Namespace Access Log Page: Not Supported 00:26:32.423 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:32.423 Command Effects Log Page: Supported 00:26:32.423 Get Log Page Extended Data: Supported 00:26:32.423 Telemetry Log Pages: Not Supported 00:26:32.423 Persistent Event Log Pages: Not Supported 00:26:32.423 Supported Log Pages Log Page: May Support 00:26:32.423 Commands Supported & Effects Log Page: Not Supported 00:26:32.423 Feature Identifiers & Effects Log Page:May Support 
00:26:32.423 NVMe-MI Commands & Effects Log Page: May Support 00:26:32.423 Data Area 4 for Telemetry Log: Not Supported 00:26:32.423 Error Log Page Entries Supported: 128 00:26:32.423 Keep Alive: Supported 00:26:32.423 Keep Alive Granularity: 10000 ms 00:26:32.423 00:26:32.423 NVM Command Set Attributes 00:26:32.423 ========================== 00:26:32.423 Submission Queue Entry Size 00:26:32.423 Max: 64 00:26:32.423 Min: 64 00:26:32.423 Completion Queue Entry Size 00:26:32.423 Max: 16 00:26:32.423 Min: 16 00:26:32.423 Number of Namespaces: 32 00:26:32.423 Compare Command: Supported 00:26:32.423 Write Uncorrectable Command: Not Supported 00:26:32.423 Dataset Management Command: Supported 00:26:32.423 Write Zeroes Command: Supported 00:26:32.423 Set Features Save Field: Not Supported 00:26:32.423 Reservations: Supported 00:26:32.423 Timestamp: Not Supported 00:26:32.423 Copy: Supported 00:26:32.423 Volatile Write Cache: Present 00:26:32.423 Atomic Write Unit (Normal): 1 00:26:32.423 Atomic Write Unit (PFail): 1 00:26:32.423 Atomic Compare & Write Unit: 1 00:26:32.423 Fused Compare & Write: Supported 00:26:32.423 Scatter-Gather List 00:26:32.423 SGL Command Set: Supported 00:26:32.423 SGL Keyed: Supported 00:26:32.423 SGL Bit Bucket Descriptor: Not Supported 00:26:32.423 SGL Metadata Pointer: Not Supported 00:26:32.423 Oversized SGL: Not Supported 00:26:32.423 SGL Metadata Address: Not Supported 00:26:32.423 SGL Offset: Supported 00:26:32.423 Transport SGL Data Block: Not Supported 00:26:32.423 Replay Protected Memory Block: Not Supported 00:26:32.423 00:26:32.423 Firmware Slot Information 00:26:32.423 ========================= 00:26:32.423 Active slot: 1 00:26:32.423 Slot 1 Firmware Revision: 25.01 00:26:32.423 00:26:32.423 00:26:32.423 Commands Supported and Effects 00:26:32.423 ============================== 00:26:32.423 Admin Commands 00:26:32.423 -------------- 00:26:32.423 Get Log Page (02h): Supported 00:26:32.423 Identify (06h): Supported 00:26:32.423 Abort 
(08h): Supported 00:26:32.423 Set Features (09h): Supported 00:26:32.423 Get Features (0Ah): Supported 00:26:32.423 Asynchronous Event Request (0Ch): Supported 00:26:32.423 Keep Alive (18h): Supported 00:26:32.423 I/O Commands 00:26:32.423 ------------ 00:26:32.423 Flush (00h): Supported LBA-Change 00:26:32.423 Write (01h): Supported LBA-Change 00:26:32.423 Read (02h): Supported 00:26:32.423 Compare (05h): Supported 00:26:32.423 Write Zeroes (08h): Supported LBA-Change 00:26:32.423 Dataset Management (09h): Supported LBA-Change 00:26:32.423 Copy (19h): Supported LBA-Change 00:26:32.423 00:26:32.423 Error Log 00:26:32.423 ========= 00:26:32.423 00:26:32.423 Arbitration 00:26:32.423 =========== 00:26:32.423 Arbitration Burst: 1 00:26:32.423 00:26:32.423 Power Management 00:26:32.423 ================ 00:26:32.423 Number of Power States: 1 00:26:32.423 Current Power State: Power State #0 00:26:32.423 Power State #0: 00:26:32.423 Max Power: 0.00 W 00:26:32.423 Non-Operational State: Operational 00:26:32.423 Entry Latency: Not Reported 00:26:32.423 Exit Latency: Not Reported 00:26:32.423 Relative Read Throughput: 0 00:26:32.423 Relative Read Latency: 0 00:26:32.423 Relative Write Throughput: 0 00:26:32.423 Relative Write Latency: 0 00:26:32.423 Idle Power: Not Reported 00:26:32.423 Active Power: Not Reported 00:26:32.423 Non-Operational Permissive Mode: Not Supported 00:26:32.423 00:26:32.423 Health Information 00:26:32.423 ================== 00:26:32.423 Critical Warnings: 00:26:32.423 Available Spare Space: OK 00:26:32.423 Temperature: OK 00:26:32.423 Device Reliability: OK 00:26:32.423 Read Only: No 00:26:32.423 Volatile Memory Backup: OK 00:26:32.423 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:32.423 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:26:32.423 Available Spare: 0% 00:26:32.423 Available Spare Threshold: 0% 00:26:32.423 Life Percentage Used:[2024-11-26 07:36:16.416227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.423 
[2024-11-26 07:36:16.416233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x94e550) 00:26:32.423 [2024-11-26 07:36:16.416240] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.423 [2024-11-26 07:36:16.416252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0b80, cid 7, qid 0 00:26:32.423 [2024-11-26 07:36:16.416442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.423 [2024-11-26 07:36:16.416448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.423 [2024-11-26 07:36:16.416452] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.423 [2024-11-26 07:36:16.416456] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0b80) on tqpair=0x94e550 00:26:32.424 [2024-11-26 07:36:16.416484] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:26:32.424 [2024-11-26 07:36:16.416493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0100) on tqpair=0x94e550 00:26:32.424 [2024-11-26 07:36:16.416499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.424 [2024-11-26 07:36:16.416504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0280) on tqpair=0x94e550 00:26:32.424 [2024-11-26 07:36:16.416509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.424 [2024-11-26 07:36:16.416514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0400) on tqpair=0x94e550 00:26:32.424 [2024-11-26 07:36:16.416518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.424 
[2024-11-26 07:36:16.416523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0580) on tqpair=0x94e550 00:26:32.424 [2024-11-26 07:36:16.416528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.424 [2024-11-26 07:36:16.416536] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.424 [2024-11-26 07:36:16.416539] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.424 [2024-11-26 07:36:16.416543] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x94e550) 00:26:32.424 [2024-11-26 07:36:16.416550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.424 [2024-11-26 07:36:16.416562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0580, cid 3, qid 0 00:26:32.424 [2024-11-26 07:36:16.416762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.424 [2024-11-26 07:36:16.416769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.424 [2024-11-26 07:36:16.416772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.424 [2024-11-26 07:36:16.416776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0580) on tqpair=0x94e550 00:26:32.424 [2024-11-26 07:36:16.416783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.424 [2024-11-26 07:36:16.416787] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.424 [2024-11-26 07:36:16.416790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x94e550) 00:26:32.424 [2024-11-26 07:36:16.416797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.424 [2024-11-26 07:36:16.416809] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0580, cid 3, qid 0 00:26:32.424 [2024-11-26 07:36:16.420870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.424 [2024-11-26 07:36:16.420879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.424 [2024-11-26 07:36:16.420883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.424 [2024-11-26 07:36:16.420887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0580) on tqpair=0x94e550 00:26:32.424 [2024-11-26 07:36:16.420892] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:26:32.424 [2024-11-26 07:36:16.420901] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:26:32.424 [2024-11-26 07:36:16.420910] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.424 [2024-11-26 07:36:16.420914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.424 [2024-11-26 07:36:16.420918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x94e550) 00:26:32.424 [2024-11-26 07:36:16.420925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.424 [2024-11-26 07:36:16.420936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b0580, cid 3, qid 0 00:26:32.424 [2024-11-26 07:36:16.421094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.424 [2024-11-26 07:36:16.421100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.424 [2024-11-26 07:36:16.421104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.424 [2024-11-26 07:36:16.421108] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b0580) on tqpair=0x94e550 00:26:32.424 [2024-11-26 07:36:16.421115] 
nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 0 milliseconds 00:26:32.424 0% 00:26:32.424 Data Units Read: 0 00:26:32.424 Data Units Written: 0 00:26:32.424 Host Read Commands: 0 00:26:32.424 Host Write Commands: 0 00:26:32.424 Controller Busy Time: 0 minutes 00:26:32.424 Power Cycles: 0 00:26:32.424 Power On Hours: 0 hours 00:26:32.424 Unsafe Shutdowns: 0 00:26:32.424 Unrecoverable Media Errors: 0 00:26:32.424 Lifetime Error Log Entries: 0 00:26:32.424 Warning Temperature Time: 0 minutes 00:26:32.424 Critical Temperature Time: 0 minutes 00:26:32.424 00:26:32.424 Number of Queues 00:26:32.424 ================ 00:26:32.424 Number of I/O Submission Queues: 127 00:26:32.424 Number of I/O Completion Queues: 127 00:26:32.424 00:26:32.424 Active Namespaces 00:26:32.424 ================= 00:26:32.424 Namespace ID:1 00:26:32.424 Error Recovery Timeout: Unlimited 00:26:32.424 Command Set Identifier: NVM (00h) 00:26:32.424 Deallocate: Supported 00:26:32.424 Deallocated/Unwritten Error: Not Supported 00:26:32.424 Deallocated Read Value: Unknown 00:26:32.424 Deallocate in Write Zeroes: Not Supported 00:26:32.424 Deallocated Guard Field: 0xFFFF 00:26:32.424 Flush: Supported 00:26:32.424 Reservation: Supported 00:26:32.424 Namespace Sharing Capabilities: Multiple Controllers 00:26:32.424 Size (in LBAs): 131072 (0GiB) 00:26:32.424 Capacity (in LBAs): 131072 (0GiB) 00:26:32.424 Utilization (in LBAs): 131072 (0GiB) 00:26:32.424 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:32.424 EUI64: ABCDEF0123456789 00:26:32.424 UUID: 73286024-dbad-4358-a508-8bfd1ced173f 00:26:32.424 Thin Provisioning: Not Supported 00:26:32.424 Per-NS Atomic Units: Yes 00:26:32.424 Atomic Boundary Size (Normal): 0 00:26:32.424 Atomic Boundary Size (PFail): 0 00:26:32.424 Atomic Boundary Offset: 0 00:26:32.424 Maximum Single Source Range Length: 65535 00:26:32.424 Maximum Copy Length: 65535 00:26:32.424 Maximum Source Range Count: 1 00:26:32.424 
NGUID/EUI64 Never Reused: No 00:26:32.424 Namespace Write Protected: No 00:26:32.424 Number of LBA Formats: 1 00:26:32.424 Current LBA Format: LBA Format #00 00:26:32.424 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:32.424 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:32.424 rmmod nvme_tcp 00:26:32.424 rmmod nvme_fabrics 00:26:32.424 rmmod nvme_keyring 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@517 -- # '[' -n 2215069 ']' 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2215069 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2215069 ']' 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2215069 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:32.424 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2215069 00:26:32.686 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:32.686 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:32.686 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2215069' 00:26:32.686 killing process with pid 2215069 00:26:32.686 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2215069 00:26:32.686 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2215069 00:26:32.686 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:32.686 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:32.686 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:32.686 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:26:32.686 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:26:32.686 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:32.686 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # 
iptables-restore 00:26:32.686 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:32.686 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:32.686 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.686 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.686 07:36:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.250 07:36:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:35.250 00:26:35.250 real 0m12.480s 00:26:35.250 user 0m8.605s 00:26:35.250 sys 0m6.818s 00:26:35.250 07:36:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:35.250 07:36:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:35.250 ************************************ 00:26:35.250 END TEST nvmf_identify 00:26:35.250 ************************************ 00:26:35.250 07:36:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:35.250 07:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:35.250 07:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:35.250 07:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.250 ************************************ 00:26:35.250 START TEST nvmf_perf 00:26:35.251 ************************************ 00:26:35.251 07:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:35.251 * Looking for test storage... 
00:26:35.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:35.251 07:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:35.251 07:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:35.251 07:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:35.251 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:26:35.252 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:35.252 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:35.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.252 --rc genhtml_branch_coverage=1 00:26:35.252 --rc genhtml_function_coverage=1 00:26:35.252 --rc genhtml_legend=1 00:26:35.252 --rc geninfo_all_blocks=1 00:26:35.252 --rc geninfo_unexecuted_blocks=1 00:26:35.252 00:26:35.252 ' 00:26:35.252 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:35.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:26:35.252 --rc genhtml_branch_coverage=1 00:26:35.252 --rc genhtml_function_coverage=1 00:26:35.252 --rc genhtml_legend=1 00:26:35.252 --rc geninfo_all_blocks=1 00:26:35.252 --rc geninfo_unexecuted_blocks=1 00:26:35.252 00:26:35.252 ' 00:26:35.252 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:35.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.252 --rc genhtml_branch_coverage=1 00:26:35.252 --rc genhtml_function_coverage=1 00:26:35.252 --rc genhtml_legend=1 00:26:35.252 --rc geninfo_all_blocks=1 00:26:35.252 --rc geninfo_unexecuted_blocks=1 00:26:35.252 00:26:35.252 ' 00:26:35.252 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:35.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.252 --rc genhtml_branch_coverage=1 00:26:35.252 --rc genhtml_function_coverage=1 00:26:35.252 --rc genhtml_legend=1 00:26:35.252 --rc geninfo_all_blocks=1 00:26:35.252 --rc geninfo_unexecuted_blocks=1 00:26:35.252 00:26:35.252 ' 00:26:35.252 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:35.252 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:35.253 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:35.254 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:35.254 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.254 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.254 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.254 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:26:35.254 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.254 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:26:35.254 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:35.254 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:35.255 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:35.255 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:35.255 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:35.255 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:35.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:35.255 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:35.255 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:35.255 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:35.255 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:35.255 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:35.255 07:36:19 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:35.255 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:26:35.255 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:35.255 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:35.255 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:35.255 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:35.255 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:35.255 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.255 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.255 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.255 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:35.256 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:35.256 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:35.256 07:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:43.401 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:43.401 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:43.401 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:43.401 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:43.401 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:43.401 07:36:27 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:43.401 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:43.401 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:26:43.401 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:43.401 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:26:43.401 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:26:43.401 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:26:43.401 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:26:43.401 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:26:43.401 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:43.401 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:43.401 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:43.401 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:43.402 
07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:43.402 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:43.402 Found 0000:31:00.1 (0x8086 - 
0x159b) 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:43.402 Found net devices under 0000:31:00.0: cvl_0_0 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:43.402 07:36:27 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:43.402 Found net devices under 0000:31:00.1: cvl_0_1 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:43.402 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:43.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:43.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.705 ms 00:26:43.664 00:26:43.664 --- 10.0.0.2 ping statistics --- 00:26:43.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.664 rtt min/avg/max/mdev = 0.705/0.705/0.705/0.000 ms 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:43.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:43.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:26:43.664 00:26:43.664 --- 10.0.0.1 ping statistics --- 00:26:43.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.664 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2220108 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2220108 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2220108 ']' 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:43.664 07:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:43.664 [2024-11-26 07:36:27.667042] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:26:43.664 [2024-11-26 07:36:27.667110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:43.664 [2024-11-26 07:36:27.761604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:43.925 [2024-11-26 07:36:27.803692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:43.926 [2024-11-26 07:36:27.803732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:43.926 [2024-11-26 07:36:27.803740] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:43.926 [2024-11-26 07:36:27.803746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:43.926 [2024-11-26 07:36:27.803752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:43.926 [2024-11-26 07:36:27.805658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.926 [2024-11-26 07:36:27.805792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.926 [2024-11-26 07:36:27.805982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:43.926 [2024-11-26 07:36:27.806104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.497 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:44.497 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:26:44.497 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:44.497 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:44.497 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:44.497 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:44.497 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:44.497 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:45.068 07:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:45.068 07:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:45.328 07:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:26:45.328 07:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:45.328 07:36:29 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:45.328 07:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:26:45.328 07:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:45.328 07:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:45.328 07:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:45.598 [2024-11-26 07:36:29.572906] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.598 07:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:45.859 07:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:45.859 07:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:45.859 07:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:45.859 07:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:46.163 07:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:46.463 [2024-11-26 07:36:30.299602] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.463 07:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:26:46.463 07:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:26:46.463 07:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:46.463 07:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:46.463 07:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:47.916 Initializing NVMe Controllers 00:26:47.916 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:26:47.916 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:26:47.916 Initialization complete. Launching workers. 00:26:47.916 ======================================================== 00:26:47.916 Latency(us) 00:26:47.916 Device Information : IOPS MiB/s Average min max 00:26:47.916 PCIE (0000:65:00.0) NSID 1 from core 0: 79678.60 311.24 400.94 13.30 4962.31 00:26:47.916 ======================================================== 00:26:47.916 Total : 79678.60 311.24 400.94 13.30 4962.31 00:26:47.916 00:26:47.916 07:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:49.301 Initializing NVMe Controllers 00:26:49.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:49.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:49.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:49.301 Initialization complete. Launching workers. 
00:26:49.301 ======================================================== 00:26:49.301 Latency(us) 00:26:49.301 Device Information : IOPS MiB/s Average min max 00:26:49.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 107.00 0.42 9401.16 241.45 45987.61 00:26:49.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15221.18 6997.29 48887.02 00:26:49.301 ======================================================== 00:26:49.301 Total : 173.00 0.68 11621.51 241.45 48887.02 00:26:49.301 00:26:49.301 07:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:50.686 Initializing NVMe Controllers 00:26:50.686 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:50.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:50.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:50.686 Initialization complete. Launching workers. 
00:26:50.686 ======================================================== 00:26:50.686 Latency(us) 00:26:50.686 Device Information : IOPS MiB/s Average min max 00:26:50.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10429.99 40.74 3068.39 454.76 6490.08 00:26:50.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3831.00 14.96 8398.24 6981.20 16037.44 00:26:50.686 ======================================================== 00:26:50.686 Total : 14260.99 55.71 4500.17 454.76 16037.44 00:26:50.686 00:26:50.686 07:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:50.686 07:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:50.686 07:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:53.229 Initializing NVMe Controllers 00:26:53.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:53.229 Controller IO queue size 128, less than required. 00:26:53.229 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:53.229 Controller IO queue size 128, less than required. 00:26:53.229 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:53.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:53.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:53.229 Initialization complete. Launching workers. 
00:26:53.229 ======================================================== 00:26:53.229 Latency(us) 00:26:53.229 Device Information : IOPS MiB/s Average min max 00:26:53.229 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1842.93 460.73 70320.06 49443.85 107155.63 00:26:53.229 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 624.48 156.12 215520.81 85784.63 324085.99 00:26:53.229 ======================================================== 00:26:53.229 Total : 2467.40 616.85 107068.94 49443.85 324085.99 00:26:53.229 00:26:53.229 07:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:53.490 No valid NVMe controllers or AIO or URING devices found 00:26:53.490 Initializing NVMe Controllers 00:26:53.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:53.490 Controller IO queue size 128, less than required. 00:26:53.490 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:53.490 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:53.490 Controller IO queue size 128, less than required. 00:26:53.490 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:53.490 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:26:53.490 WARNING: Some requested NVMe devices were skipped 00:26:53.490 07:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:56.034 Initializing NVMe Controllers 00:26:56.034 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:56.034 Controller IO queue size 128, less than required. 00:26:56.034 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:56.034 Controller IO queue size 128, less than required. 00:26:56.034 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:56.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:56.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:56.034 Initialization complete. Launching workers. 
00:26:56.034 00:26:56.034 ==================== 00:26:56.034 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:56.034 TCP transport: 00:26:56.034 polls: 20954 00:26:56.034 idle_polls: 13048 00:26:56.034 sock_completions: 7906 00:26:56.034 nvme_completions: 5881 00:26:56.034 submitted_requests: 8836 00:26:56.034 queued_requests: 1 00:26:56.034 00:26:56.034 ==================== 00:26:56.034 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:56.034 TCP transport: 00:26:56.034 polls: 20411 00:26:56.034 idle_polls: 10921 00:26:56.034 sock_completions: 9490 00:26:56.034 nvme_completions: 8865 00:26:56.034 submitted_requests: 13322 00:26:56.034 queued_requests: 1 00:26:56.034 ======================================================== 00:26:56.034 Latency(us) 00:26:56.034 Device Information : IOPS MiB/s Average min max 00:26:56.034 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1469.97 367.49 88417.06 48051.21 145379.49 00:26:56.034 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2215.96 553.99 58256.83 30000.31 90044.72 00:26:56.034 ======================================================== 00:26:56.034 Total : 3685.93 921.48 70284.92 30000.31 145379.49 00:26:56.034 00:26:56.034 07:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:56.034 07:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:56.294 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:26:56.294 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:56.294 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:56.295 rmmod nvme_tcp 00:26:56.295 rmmod nvme_fabrics 00:26:56.295 rmmod nvme_keyring 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2220108 ']' 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2220108 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2220108 ']' 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2220108 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2220108 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2220108' 00:26:56.295 killing process with pid 2220108 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 2220108 00:26:56.295 07:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2220108 00:26:58.206 07:36:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:58.206 07:36:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:58.206 07:36:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:58.206 07:36:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:26:58.206 07:36:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:26:58.206 07:36:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:58.206 07:36:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:26:58.206 07:36:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:58.206 07:36:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:58.206 07:36:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.206 07:36:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.206 07:36:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:00.752 00:27:00.752 real 0m25.461s 00:27:00.752 user 0m59.257s 00:27:00.752 sys 0m9.365s 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:00.752 ************************************ 00:27:00.752 END TEST nvmf_perf 00:27:00.752 ************************************ 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.752 ************************************ 00:27:00.752 START TEST nvmf_fio_host 00:27:00.752 ************************************ 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:00.752 * Looking for test storage... 00:27:00.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:00.752 07:36:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:00.752 07:36:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:00.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.752 --rc genhtml_branch_coverage=1 00:27:00.752 --rc genhtml_function_coverage=1 00:27:00.752 --rc genhtml_legend=1 00:27:00.752 --rc geninfo_all_blocks=1 00:27:00.752 --rc geninfo_unexecuted_blocks=1 00:27:00.752 00:27:00.752 ' 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:00.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.752 --rc genhtml_branch_coverage=1 00:27:00.752 --rc genhtml_function_coverage=1 00:27:00.752 --rc genhtml_legend=1 00:27:00.752 --rc geninfo_all_blocks=1 00:27:00.752 --rc geninfo_unexecuted_blocks=1 00:27:00.752 00:27:00.752 ' 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:00.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.752 --rc genhtml_branch_coverage=1 00:27:00.752 --rc genhtml_function_coverage=1 00:27:00.752 --rc genhtml_legend=1 00:27:00.752 --rc geninfo_all_blocks=1 00:27:00.752 --rc geninfo_unexecuted_blocks=1 00:27:00.752 00:27:00.752 ' 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:00.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.752 --rc genhtml_branch_coverage=1 00:27:00.752 --rc genhtml_function_coverage=1 00:27:00.752 --rc genhtml_legend=1 00:27:00.752 --rc geninfo_all_blocks=1 00:27:00.752 --rc geninfo_unexecuted_blocks=1 00:27:00.752 00:27:00.752 ' 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:00.752 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:00.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:00.753 07:36:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:00.753 07:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:08.900 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:31:00.0 (0x8086 - 0x159b)' 00:27:08.900 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:08.901 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.901 07:36:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:08.901 Found net devices under 0000:31:00.0: cvl_0_0 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:08.901 Found net devices under 0000:31:00.1: cvl_0_1 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:08.901 07:36:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:08.901 07:36:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:09.163 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:09.163 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:09.163 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:09.163 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:09.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:09.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:27:09.163 00:27:09.163 --- 10.0.0.2 ping statistics --- 00:27:09.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.163 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:27:09.163 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:09.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:09.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:27:09.163 00:27:09.163 --- 10.0.0.1 ping statistics --- 00:27:09.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.163 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:27:09.163 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.163 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:27:09.163 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:09.163 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.163 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:09.164 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:09.164 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.164 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:09.164 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:09.164 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:27:09.164 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:09.164 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:09.164 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.164 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2227792 00:27:09.164 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:09.164 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:09.164 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2227792 00:27:09.164 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2227792 ']' 00:27:09.164 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.164 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:09.164 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.164 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:09.164 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.164 [2024-11-26 07:36:53.174218] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:27:09.164 [2024-11-26 07:36:53.174288] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.164 [2024-11-26 07:36:53.264750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:09.424 [2024-11-26 07:36:53.306134] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.424 [2024-11-26 07:36:53.306171] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:09.424 [2024-11-26 07:36:53.306179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:09.424 [2024-11-26 07:36:53.306185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:09.424 [2024-11-26 07:36:53.306191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:09.424 [2024-11-26 07:36:53.307930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.424 [2024-11-26 07:36:53.308059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:09.424 [2024-11-26 07:36:53.308202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.424 [2024-11-26 07:36:53.308202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:09.995 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:09.995 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:27:09.995 07:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:10.256 [2024-11-26 07:36:54.143114] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.256 07:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:10.256 07:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:10.256 07:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.256 07:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:10.517 Malloc1 00:27:10.517 07:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:10.517 07:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:10.780 07:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:11.041 [2024-11-26 07:36:54.949549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.041 07:36:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:11.041 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:11.041 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:11.041 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:11.041 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:11.041 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:11.041 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:11.041 07:36:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:11.041 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:27:11.041 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:11.041 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:11.041 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:11.041 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:11.041 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:11.321 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:11.321 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:11.321 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:11.321 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:11.321 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:11.321 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:11.321 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:11.321 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:11.321 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:11.322 07:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:11.583 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:11.583 fio-3.35 00:27:11.583 Starting 1 thread 00:27:14.125 00:27:14.125 test: (groupid=0, jobs=1): err= 0: pid=2228364: Tue Nov 26 07:36:58 2024 00:27:14.125 read: IOPS=13.9k, BW=54.3MiB/s (57.0MB/s)(109MiB/2004msec) 00:27:14.125 slat (usec): min=2, max=281, avg= 2.16, stdev= 2.45 00:27:14.125 clat (usec): min=3369, max=9026, avg=5077.41, stdev=387.94 00:27:14.125 lat (usec): min=3371, max=9033, avg=5079.57, stdev=388.21 00:27:14.125 clat percentiles (usec): 00:27:14.125 | 1.00th=[ 4228], 5.00th=[ 4555], 10.00th=[ 4621], 20.00th=[ 4817], 00:27:14.125 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145], 00:27:14.125 | 70.00th=[ 5211], 80.00th=[ 5342], 90.00th=[ 5473], 95.00th=[ 5604], 00:27:14.125 | 99.00th=[ 5932], 99.50th=[ 7046], 99.90th=[ 8356], 99.95th=[ 8717], 00:27:14.125 | 99.99th=[ 8848] 00:27:14.125 bw ( KiB/s): min=54632, max=55952, per=99.94%, avg=55584.00, stdev=635.89, samples=4 00:27:14.125 iops : min=13658, max=13988, avg=13896.00, stdev=158.97, samples=4 00:27:14.125 write: IOPS=13.9k, BW=54.3MiB/s (57.0MB/s)(109MiB/2004msec); 0 zone resets 00:27:14.125 slat (usec): min=2, max=270, avg= 2.22, stdev= 1.79 00:27:14.125 clat (usec): min=2503, max=8033, avg=4102.35, stdev=338.78 00:27:14.125 lat (usec): min=2505, max=8036, avg=4104.57, stdev=339.11 00:27:14.125 clat percentiles (usec): 00:27:14.125 | 1.00th=[ 3392], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3884], 00:27:14.125 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:27:14.125 | 70.00th=[ 
4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:27:14.125 | 99.00th=[ 4817], 99.50th=[ 6194], 99.90th=[ 7046], 99.95th=[ 7308], 00:27:14.125 | 99.99th=[ 7832] 00:27:14.125 bw ( KiB/s): min=55008, max=55920, per=100.00%, avg=55640.00, stdev=426.63, samples=4 00:27:14.125 iops : min=13752, max=13980, avg=13910.00, stdev=106.66, samples=4 00:27:14.125 lat (msec) : 4=18.29%, 10=81.71% 00:27:14.125 cpu : usr=75.24%, sys=23.36%, ctx=16, majf=0, minf=17 00:27:14.125 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:14.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:14.125 issued rwts: total=27865,27874,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.125 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:14.125 00:27:14.125 Run status group 0 (all jobs): 00:27:14.125 READ: bw=54.3MiB/s (57.0MB/s), 54.3MiB/s-54.3MiB/s (57.0MB/s-57.0MB/s), io=109MiB (114MB), run=2004-2004msec 00:27:14.125 WRITE: bw=54.3MiB/s (57.0MB/s), 54.3MiB/s-54.3MiB/s (57.0MB/s-57.0MB/s), io=109MiB (114MB), run=2004-2004msec 00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:14.125 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:14.125 07:36:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:14.126 07:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:14.387 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:14.387 fio-3.35 00:27:14.387 Starting 1 thread 00:27:16.931 00:27:16.931 test: (groupid=0, jobs=1): err= 0: pid=2229156: Tue Nov 26 07:37:00 2024 00:27:16.931 read: IOPS=9247, BW=144MiB/s (152MB/s)(290MiB/2007msec) 00:27:16.931 slat (usec): min=3, max=114, avg= 3.59, stdev= 1.60 00:27:16.931 clat (usec): min=2163, max=17785, avg=8458.92, stdev=2065.06 00:27:16.931 lat (usec): min=2166, max=17789, avg=8462.51, stdev=2065.14 00:27:16.931 clat percentiles (usec): 00:27:16.931 | 1.00th=[ 4293], 5.00th=[ 5407], 10.00th=[ 5866], 20.00th=[ 6587], 00:27:16.931 | 30.00th=[ 7177], 40.00th=[ 7767], 50.00th=[ 8356], 60.00th=[ 8979], 00:27:16.931 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[10945], 95.00th=[11600], 00:27:16.931 | 99.00th=[13960], 99.50th=[15270], 99.90th=[16712], 99.95th=[17171], 00:27:16.931 | 99.99th=[17695] 00:27:16.931 bw ( KiB/s): min=65536, max=81760, per=49.32%, avg=72976.00, stdev=6729.36, samples=4 00:27:16.931 iops : min= 4096, max= 5110, avg=4561.00, stdev=420.59, samples=4 00:27:16.931 write: IOPS=5523, BW=86.3MiB/s (90.5MB/s)(150MiB/1733msec); 0 zone resets 00:27:16.931 slat (usec): min=39, max=345, avg=40.82, stdev= 6.69 00:27:16.931 clat (usec): min=2039, max=16417, avg=9474.57, stdev=1590.23 00:27:16.931 lat (usec): min=2079, max=16457, avg=9515.39, stdev=1591.07 00:27:16.931 clat percentiles (usec): 00:27:16.931 | 1.00th=[ 6390], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 8160], 00:27:16.931 | 
30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9634], 00:27:16.931 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11600], 95.00th=[12387], 00:27:16.931 | 99.00th=[13960], 99.50th=[14353], 99.90th=[15270], 99.95th=[15533], 00:27:16.931 | 99.99th=[16450] 00:27:16.931 bw ( KiB/s): min=67616, max=84896, per=86.03%, avg=76024.00, stdev=7090.99, samples=4 00:27:16.931 iops : min= 4226, max= 5306, avg=4751.50, stdev=443.19, samples=4 00:27:16.931 lat (msec) : 4=0.50%, 10=72.75%, 20=26.74% 00:27:16.931 cpu : usr=84.85%, sys=13.41%, ctx=17, majf=0, minf=45 00:27:16.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:27:16.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:16.931 issued rwts: total=18560,9572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.931 00:27:16.931 Run status group 0 (all jobs): 00:27:16.931 READ: bw=144MiB/s (152MB/s), 144MiB/s-144MiB/s (152MB/s-152MB/s), io=290MiB (304MB), run=2007-2007msec 00:27:16.931 WRITE: bw=86.3MiB/s (90.5MB/s), 86.3MiB/s-86.3MiB/s (90.5MB/s-90.5MB/s), io=150MiB (157MB), run=1733-1733msec 00:27:16.931 07:37:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:16.931 07:37:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:27:16.931 07:37:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:16.931 07:37:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:16.931 07:37:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:27:16.931 07:37:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:16.931 07:37:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:27:16.931 07:37:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:16.931 07:37:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:27:16.931 07:37:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:16.931 07:37:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:16.931 rmmod nvme_tcp 00:27:16.931 rmmod nvme_fabrics 00:27:16.931 rmmod nvme_keyring 00:27:16.931 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:16.931 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:27:16.931 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:27:16.931 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2227792 ']' 00:27:16.931 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2227792 00:27:16.931 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2227792 ']' 00:27:16.931 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2227792 00:27:16.931 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:27:16.931 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:16.931 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2227792 00:27:17.193 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:17.193 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:17.193 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2227792' 00:27:17.193 killing 
process with pid 2227792 00:27:17.193 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2227792 00:27:17.193 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2227792 00:27:17.193 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:17.193 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:17.193 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:17.193 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:27:17.193 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:17.193 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:27:17.193 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:17.193 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:17.193 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:17.193 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.193 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:17.193 07:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.741 07:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:19.742 00:27:19.742 real 0m18.912s 00:27:19.742 user 1m6.981s 00:27:19.742 sys 0m8.266s 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.742 ************************************ 00:27:19.742 END TEST 
nvmf_fio_host 00:27:19.742 ************************************ 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.742 ************************************ 00:27:19.742 START TEST nvmf_failover 00:27:19.742 ************************************ 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:19.742 * Looking for test storage... 00:27:19.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@337 -- # IFS=.-: 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:19.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.742 --rc genhtml_branch_coverage=1 00:27:19.742 --rc genhtml_function_coverage=1 00:27:19.742 --rc genhtml_legend=1 00:27:19.742 --rc geninfo_all_blocks=1 00:27:19.742 --rc geninfo_unexecuted_blocks=1 00:27:19.742 00:27:19.742 ' 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:19.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.742 --rc genhtml_branch_coverage=1 00:27:19.742 --rc genhtml_function_coverage=1 00:27:19.742 --rc genhtml_legend=1 00:27:19.742 --rc geninfo_all_blocks=1 00:27:19.742 --rc geninfo_unexecuted_blocks=1 00:27:19.742 00:27:19.742 ' 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:19.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.742 --rc genhtml_branch_coverage=1 00:27:19.742 --rc genhtml_function_coverage=1 00:27:19.742 --rc genhtml_legend=1 00:27:19.742 --rc geninfo_all_blocks=1 00:27:19.742 --rc geninfo_unexecuted_blocks=1 00:27:19.742 00:27:19.742 ' 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:19.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.742 --rc genhtml_branch_coverage=1 00:27:19.742 --rc genhtml_function_coverage=1 00:27:19.742 --rc genhtml_legend=1 00:27:19.742 --rc geninfo_all_blocks=1 
00:27:19.742 --rc geninfo_unexecuted_blocks=1 00:27:19.742 00:27:19.742 ' 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.742 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:19.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:27:19.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:27.882 07:37:11 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:27.882 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:27.882 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:27.882 Found net devices under 0000:31:00.0: cvl_0_0 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:27.882 Found net devices under 0000:31:00.1: cvl_0_1 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:27.882 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:27.882 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:28.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:28.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:27:28.144 00:27:28.144 --- 10.0.0.2 ping statistics --- 00:27:28.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.144 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:28.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:28.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:27:28.144 00:27:28.144 --- 10.0.0.1 ping statistics --- 00:27:28.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.144 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2234355 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2234355 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2234355 ']' 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:28.144 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:28.144 [2024-11-26 07:37:12.143091] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:27:28.144 [2024-11-26 07:37:12.143161] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:28.144 [2024-11-26 07:37:12.251736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:28.404 [2024-11-26 07:37:12.304066] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:28.404 [2024-11-26 07:37:12.304118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:28.404 [2024-11-26 07:37:12.304126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:28.404 [2024-11-26 07:37:12.304133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:28.404 [2024-11-26 07:37:12.304140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:28.404 [2024-11-26 07:37:12.305991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:28.404 [2024-11-26 07:37:12.306332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:28.404 [2024-11-26 07:37:12.306334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.975 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:28.975 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:28.975 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:28.975 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:28.975 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:28.975 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:28.975 07:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:29.236 [2024-11-26 07:37:13.150223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:29.236 07:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:29.236 Malloc0 00:27:29.497 07:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:29.497 07:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:29.759 07:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:30.020 [2024-11-26 07:37:13.899336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.020 07:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:30.020 [2024-11-26 07:37:14.083816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:30.020 07:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:30.282 [2024-11-26 07:37:14.264408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:30.282 07:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:30.282 07:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2234862 00:27:30.282 07:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:30.282 07:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2234862 /var/tmp/bdevperf.sock 00:27:30.282 07:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 2234862 ']' 00:27:30.282 07:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:30.282 07:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:30.282 07:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:30.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:30.282 07:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:30.282 07:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:31.226 07:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:31.226 07:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:31.226 07:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:31.486 NVMe0n1 00:27:31.486 07:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:31.747 00:27:31.747 07:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2235107 00:27:31.747 07:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:31.747 07:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:27:32.689 07:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:32.949 [2024-11-26 07:37:16.844457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54e390 is same with the state(6) to be set 00:27:32.950 07:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:27:36.250 07:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:36.250 00:27:36.250 07:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:36.250 [2024-11-26 07:37:20.328088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54f140 is same with the state(6) to be set
00:27:36.250 07:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:27:39.551 07:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:39.551 [2024-11-26 07:37:23.519436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:39.551 07:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:27:40.494 07:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:27:40.756 [2024-11-26 07:37:24.715096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x550090 is same with the state(6) to be set
00:27:40.757 07:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2235107
00:27:47.351 {
00:27:47.351 "results": [
00:27:47.351 {
00:27:47.351 "job": "NVMe0n1",
00:27:47.351 "core_mask": "0x1",
00:27:47.351 "workload": "verify",
00:27:47.351 "status": "finished",
00:27:47.351 "verify_range": {
00:27:47.351 "start": 0,
00:27:47.351 "length": 16384
00:27:47.351 },
00:27:47.351 "queue_depth": 128,
00:27:47.351 "io_size": 4096,
00:27:47.351 "runtime": 15.006575,
00:27:47.351 "iops": 11144.848174883342,
00:27:47.351 "mibps": 43.534563183138054,
00:27:47.351 "io_failed": 5173,
00:27:47.351 "io_timeout": 0,
00:27:47.351 "avg_latency_us": 11112.783883292057,
00:27:47.351 "min_latency_us": 535.8933333333333,
00:27:47.351 "max_latency_us": 50244.26666666667
00:27:47.351 }
00:27:47.351 ],
00:27:47.351 "core_count": 1
00:27:47.351 }
00:27:47.351 07:37:30
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2234862
00:27:47.351 07:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2234862 ']'
00:27:47.351 07:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2234862
00:27:47.351 07:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:27:47.351 07:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:47.351 07:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2234862
00:27:47.351 07:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:47.351 07:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:47.351 07:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2234862'
00:27:47.351 killing process with pid 2234862
00:27:47.351 07:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2234862
00:27:47.351 07:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2234862
00:27:47.351 07:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:47.351 [2024-11-26 07:37:14.342449] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization...
00:27:47.351 [2024-11-26 07:37:14.342525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2234862 ] 00:27:47.351 [2024-11-26 07:37:14.422394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.351 [2024-11-26 07:37:14.458383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.351 Running I/O for 15 seconds... 00:27:47.351 11190.00 IOPS, 43.71 MiB/s [2024-11-26T06:37:31.488Z] [2024-11-26 07:37:16.845068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.351 [2024-11-26 07:37:16.845101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.351 [2024-11-26 07:37:16.845117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.351 [2024-11-26 07:37:16.845126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.351 [2024-11-26 07:37:16.845136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.351 [2024-11-26 07:37:16.845144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.351 [2024-11-26 07:37:16.845154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.351 [2024-11-26 07:37:16.845162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:47.351 [2024-11-26 07:37:16.845171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.351 [2024-11-26 07:37:16.845178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.351 [2024-11-26 07:37:16.845188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.351 [2024-11-26 07:37:16.845195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.351 [2024-11-26 07:37:16.845205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.351 [2024-11-26 07:37:16.845213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.351 [2024-11-26 07:37:16.845222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.351 [2024-11-26 07:37:16.845229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.351 [2024-11-26 07:37:16.845238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.351 [2024-11-26 07:37:16.845245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.351 [2024-11-26 07:37:16.845255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.351 [2024-11-26 07:37:16.845263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.351 [2024-11-26 07:37:16.845272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.351 [2024-11-26 07:37:16.845280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.351 [2024-11-26 07:37:16.845294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.351 [2024-11-26 07:37:16.845302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.351 [2024-11-26 07:37:16.845312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.351 [2024-11-26 07:37:16.845319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.351 [2024-11-26 07:37:16.845328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.351 [2024-11-26 07:37:16.845336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.351 [2024-11-26 07:37:16.845345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.351 [2024-11-26 07:37:16.845353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.351 [2024-11-26 07:37:16.845362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.351 [2024-11-26 07:37:16.845369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.351 [2024-11-26 07:37:16.845378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.351 [2024-11-26 07:37:16.845386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.351 [2024-11-26 07:37:16.845395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.352 [2024-11-26 07:37:16.845402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.352 [2024-11-26 07:37:16.845411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.352 [2024-11-26 07:37:16.845419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.352 [2024-11-26 07:37:16.845428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.352 [2024-11-26 07:37:16.845435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.352 [2024-11-26 07:37:16.845444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.352 [2024-11-26 07:37:16.845452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.352 [2024-11-26 
07:37:16.845462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.352 [2024-11-26 07:37:16.845469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.352 [2024-11-26 07:37:16.845479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.352 [2024-11-26 07:37:16.845486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.352 [2024-11-26 07:37:16.845496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.352 [2024-11-26 07:37:16.845506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.352 [2024-11-26 07:37:16.845515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.352 [2024-11-26 07:37:16.845522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.352 [2024-11-26 07:37:16.845531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.352 [2024-11-26 07:37:16.845539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.352 [2024-11-26 07:37:16.845548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.352 [2024-11-26 07:37:16.845555] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:47.352 [2024-11-26 07:37:16.845564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:47.352 [2024-11-26 07:37:16.845571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for the remaining queued I/O on qid:1 — WRITE lba:96968 through lba:97568 and READ lba:96600 through lba:96776 — every completion ABORTED - SQ DELETION (00/08) ...]
00:27:47.354 [2024-11-26 07:37:16.847256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:47.354 [2024-11-26 07:37:16.847263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:47.354 [2024-11-26 07:37:16.847270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96784 len:8 PRP1 0x0 PRP2 0x0
00:27:47.354 [2024-11-26 07:37:16.847277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:47.354 [2024-11-26 07:37:16.847316] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:27:47.354 [2024-11-26 07:37:16.847338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:47.354 [2024-11-26 07:37:16.847347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair repeats for ASYNC EVENT REQUEST cid:1 through cid:3 on qid:0, all ABORTED - SQ DELETION (00/08) ...]
00:27:47.354 [2024-11-26 07:37:16.847402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:27:47.354 [2024-11-26 07:37:16.847429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad8d80 (9): Bad file descriptor
00:27:47.354 [2024-11-26 07:37:16.850936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:27:47.354 [2024-11-26 07:37:16.873741] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:27:47.354 11196.50 IOPS, 43.74 MiB/s [2024-11-26T06:37:31.491Z] 11235.67 IOPS, 43.89 MiB/s [2024-11-26T06:37:31.491Z] 11232.00 IOPS, 43.88 MiB/s [2024-11-26T06:37:31.491Z] [2024-11-26 07:37:20.328982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.354 [2024-11-26 07:37:20.329018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 
[2024-11-26 07:37:20.329302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329396] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.355 [2024-11-26 07:37:20.329586] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.355 [2024-11-26 07:37:20.329595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.329602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.329618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.329635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.329651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.329668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 
nsid:1 lba:22800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.329684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.329702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.329718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.329735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.329751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.329768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 
[2024-11-26 07:37:20.329777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.329785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.329802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.329818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.329835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.329853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.356 [2024-11-26 07:37:20.329876] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.356 [2024-11-26 07:37:20.329893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.356 [2024-11-26 07:37:20.329912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.356 [2024-11-26 07:37:20.329928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.356 [2024-11-26 07:37:20.329945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.356 [2024-11-26 07:37:20.329962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.356 [2024-11-26 07:37:20.329979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.329988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.329995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.330004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.330012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.330021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.330029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.330038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.330045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.330054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.330061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 
[2024-11-26 07:37:20.330070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.330077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.330086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.330094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.330103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.330110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.330121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.330129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.330138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.330145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.330155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.330162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.330171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.330178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.330187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.330195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.330204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.330211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.330220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.330227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.330237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.330244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.356 [2024-11-26 07:37:20.330254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 
lba:23016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.356 [2024-11-26 07:37:20.330262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 
[2024-11-26 07:37:20.330356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.357 [2024-11-26 07:37:20.330529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 
lba:22304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.357 [2024-11-26 07:37:20.330547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.357 [2024-11-26 07:37:20.330563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.357 [2024-11-26 07:37:20.330580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.357 [2024-11-26 07:37:20.330596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.357 [2024-11-26 07:37:20.330612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.357 [2024-11-26 07:37:20.330629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 
[2024-11-26 07:37:20.330638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.357 [2024-11-26 07:37:20.330645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330726] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.357 [2024-11-26 07:37:20.330760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.357 [2024-11-26 07:37:20.330788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:8 PRP1 0x0 PRP2 0x0 00:27:47.357 [2024-11-26 07:37:20.330796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.357 [2024-11-26 07:37:20.330841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.357 [2024-11-26 07:37:20.330858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 
07:37:20.330872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.357 [2024-11-26 07:37:20.330880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.357 [2024-11-26 07:37:20.330895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.330903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad8d80 is same with the state(6) to be set 00:27:47.357 [2024-11-26 07:37:20.331130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.357 [2024-11-26 07:37:20.331138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.357 [2024-11-26 07:37:20.331144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23208 len:8 PRP1 0x0 PRP2 0x0 00:27:47.357 [2024-11-26 07:37:20.331152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.357 [2024-11-26 07:37:20.331160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.357 [2024-11-26 07:37:20.331166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.357 [2024-11-26 07:37:20.331173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23216 len:8 PRP1 0x0 PRP2 0x0 00:27:47.357 [2024-11-26 07:37:20.331180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:47.357 [2024-11-26 07:37:20.331188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23224 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23240 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:27:47.358 [2024-11-26 07:37:20.331282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23248 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22360 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22368 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22376 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22384 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22392 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22400 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 
[2024-11-26 07:37:20.331465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22408 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22416 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22424 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:22432 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22440 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22448 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22456 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331648] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22464 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22472 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22480 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331739] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22488 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22232 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:8 PRP1 0x0 PRP2 0x0 00:27:47.358 [2024-11-26 07:37:20.331800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.358 [2024-11-26 07:37:20.331807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.358 [2024-11-26 07:37:20.331813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.358 [2024-11-26 07:37:20.331819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22504 len:8 PRP1 0x0 PRP2 0x0 00:27:47.359 [2024-11-26 07:37:20.331826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.359 [2024-11-26 07:37:20.331834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.359 [2024-11-26 07:37:20.331839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.359 [2024-11-26 07:37:20.331845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22512 len:8 PRP1 0x0 PRP2 0x0 00:27:47.359 [2024-11-26 07:37:20.331854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.359 [2024-11-26 07:37:20.331865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.359 [2024-11-26 07:37:20.331871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.359 [2024-11-26 07:37:20.331877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22520 len:8 PRP1 0x0 PRP2 0x0 00:27:47.359 [2024-11-26 07:37:20.331884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.359 [2024-11-26 07:37:20.331892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.359 [2024-11-26 07:37:20.331897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.359 [2024-11-26 07:37:20.331904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:8 PRP1 0x0 PRP2 0x0 00:27:47.359 [2024-11-26 07:37:20.331911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.359 [2024-11-26 07:37:20.331918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.359 [2024-11-26 07:37:20.331924] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.359 [2024-11-26 07:37:20.331930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22536 len:8 PRP1 0x0 PRP2 0x0 00:27:47.359 [2024-11-26 07:37:20.331938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.359 [2024-11-26 07:37:20.331945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.359 [2024-11-26 07:37:20.331951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.359 [2024-11-26 07:37:20.331957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22544 len:8 PRP1 0x0 PRP2 0x0 00:27:47.359 [2024-11-26 07:37:20.331964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.359 [2024-11-26 07:37:20.331972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.359 [2024-11-26 07:37:20.331977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.359 [2024-11-26 07:37:20.331983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22552 len:8 PRP1 0x0 PRP2 0x0 00:27:47.359 [2024-11-26 07:37:20.331990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.359 [2024-11-26 07:37:20.331997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.359 [2024-11-26 07:37:20.332003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.359 [2024-11-26 07:37:20.332009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:8 PRP1 0x0 PRP2 0x0 00:27:47.359 
[2024-11-26 07:37:20.332016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.359 [2024-11-26 07:37:20.332024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.359 [2024-11-26 07:37:20.332029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.359 [2024-11-26 07:37:20.332035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22568 len:8 PRP1 0x0 PRP2 0x0 00:27:47.359 [2024-11-26 07:37:20.332043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.359 [2024-11-26 07:37:20.332051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.359 [2024-11-26 07:37:20.332056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.359 [2024-11-26 07:37:20.332066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22576 len:8 PRP1 0x0 PRP2 0x0 00:27:47.359 [2024-11-26 07:37:20.332074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.359 [2024-11-26 07:37:20.332081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.359 [2024-11-26 07:37:20.332087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.359 [2024-11-26 07:37:20.332093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22584 len:8 PRP1 0x0 PRP2 0x0 00:27:47.359 [2024-11-26 07:37:20.332100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.359 [2024-11-26 07:37:20.332108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:27:47.359 [2024-11-26 07:37:20.332113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.359 [2024-11-26 07:37:20.332119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:8 PRP1 0x0 PRP2 0x0 00:27:47.359 [2024-11-26 07:37:20.332126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.359 [2024-11-26 07:37:20.332134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.359 [2024-11-26 07:37:20.332139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.359 [2024-11-26 07:37:20.332146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22600 len:8 PRP1 0x0 PRP2 0x0 00:27:47.359 [2024-11-26 07:37:20.332153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.359 [2024-11-26 07:37:20.332160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.359 [2024-11-26 07:37:20.332166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.359 [2024-11-26 07:37:20.332172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22608 len:8 PRP1 0x0 PRP2 0x0 00:27:47.359 [2024-11-26 07:37:20.332179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.359 [2024-11-26 07:37:20.332187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.359 [2024-11-26 07:37:20.332192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.359 [2024-11-26 07:37:20.332199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22616 len:8 PRP1 0x0 PRP2 0x0 00:27:47.359 [2024-11-26 07:37:20.332206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.359 [2024-11-26 07:37:20.332213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.359 [2024-11-26 07:37:20.332219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[... the same abort/manual-complete cycle (nvme_qpair_abort_queued_reqs -> nvme_qpair_manual_complete_request -> nvme_io_qpair_print_command -> spdk_nvme_print_completion) repeats for queued WRITE commands lba:22624 through lba:22880 (step 8), an interleaved burst of queued READ commands lba:22240 through lba:22288 (step 8), and further WRITE commands lba:22888 through lba:23128 (step 8); every request completes with status ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, timestamps 2024-11-26 07:37:20.332232 through 07:37:20.338276 ...]
[2024-11-26 07:37:20.338284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.362 [2024-11-26 07:37:20.338289] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.362 [2024-11-26 07:37:20.338295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:8 PRP1 0x0 PRP2 0x0 00:27:47.362 [2024-11-26 07:37:20.338304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.362 [2024-11-26 07:37:20.338312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.362 [2024-11-26 07:37:20.338317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.362 [2024-11-26 07:37:20.338323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22296 len:8 PRP1 0x0 PRP2 0x0 00:27:47.362 [2024-11-26 07:37:20.338330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.362 [2024-11-26 07:37:20.338338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.362 [2024-11-26 07:37:20.338343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.362 [2024-11-26 07:37:20.338350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22304 len:8 PRP1 0x0 PRP2 0x0 00:27:47.362 [2024-11-26 07:37:20.338358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.362 [2024-11-26 07:37:20.338366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.362 [2024-11-26 07:37:20.338371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.362 [2024-11-26 07:37:20.338377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22312 len:8 PRP1 0x0 PRP2 0x0 00:27:47.362 
[2024-11-26 07:37:20.338384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.362 [2024-11-26 07:37:20.338392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.362 [2024-11-26 07:37:20.338398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.362 [2024-11-26 07:37:20.338404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22320 len:8 PRP1 0x0 PRP2 0x0 00:27:47.362 [2024-11-26 07:37:20.338411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.362 [2024-11-26 07:37:20.338418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.362 [2024-11-26 07:37:20.338424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.362 [2024-11-26 07:37:20.338430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22328 len:8 PRP1 0x0 PRP2 0x0 00:27:47.363 [2024-11-26 07:37:20.338437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:20.338445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.363 [2024-11-26 07:37:20.338450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.363 [2024-11-26 07:37:20.338456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22336 len:8 PRP1 0x0 PRP2 0x0 00:27:47.363 [2024-11-26 07:37:20.338463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:20.338471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:27:47.363 [2024-11-26 07:37:20.338476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.363 [2024-11-26 07:37:20.338482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22344 len:8 PRP1 0x0 PRP2 0x0 00:27:47.363 [2024-11-26 07:37:20.338489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:20.338497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.363 [2024-11-26 07:37:20.338502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.363 [2024-11-26 07:37:20.338510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22352 len:8 PRP1 0x0 PRP2 0x0 00:27:47.363 [2024-11-26 07:37:20.338517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:20.338524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.363 [2024-11-26 07:37:20.338529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.363 [2024-11-26 07:37:20.338535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23144 len:8 PRP1 0x0 PRP2 0x0 00:27:47.363 [2024-11-26 07:37:20.338542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:20.338550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.363 [2024-11-26 07:37:20.338555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.363 [2024-11-26 07:37:20.338561] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23152 len:8 PRP1 0x0 PRP2 0x0 00:27:47.363 [2024-11-26 07:37:20.338568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:20.338576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.363 [2024-11-26 07:37:20.338581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.363 [2024-11-26 07:37:20.338587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23160 len:8 PRP1 0x0 PRP2 0x0 00:27:47.363 [2024-11-26 07:37:20.338594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:20.338601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.363 [2024-11-26 07:37:20.338607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.363 [2024-11-26 07:37:20.338613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:8 PRP1 0x0 PRP2 0x0 00:27:47.363 [2024-11-26 07:37:20.338620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:20.338627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.363 [2024-11-26 07:37:20.338633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.363 [2024-11-26 07:37:20.338639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23176 len:8 PRP1 0x0 PRP2 0x0 00:27:47.363 [2024-11-26 07:37:20.338646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:47.363 [2024-11-26 07:37:20.338654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.363 [2024-11-26 07:37:20.338659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.363 [2024-11-26 07:37:20.338665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23184 len:8 PRP1 0x0 PRP2 0x0 00:27:47.363 [2024-11-26 07:37:20.338672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:20.338679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.363 [2024-11-26 07:37:20.338685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.363 [2024-11-26 07:37:20.338691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23192 len:8 PRP1 0x0 PRP2 0x0 00:27:47.363 [2024-11-26 07:37:20.338698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:20.338707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.363 [2024-11-26 07:37:20.338712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.363 [2024-11-26 07:37:20.338719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:8 PRP1 0x0 PRP2 0x0 00:27:47.363 [2024-11-26 07:37:20.338726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:20.338765] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:27:47.363 [2024-11-26 07:37:20.338775] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:27:47.363 [2024-11-26 07:37:20.342317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:27:47.363 [2024-11-26 07:37:20.342345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad8d80 (9): Bad file descriptor 00:27:47.363 [2024-11-26 07:37:20.411271] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:27:47.363 11047.60 IOPS, 43.15 MiB/s [2024-11-26T06:37:31.500Z] 11051.00 IOPS, 43.17 MiB/s [2024-11-26T06:37:31.500Z] 11063.86 IOPS, 43.22 MiB/s [2024-11-26T06:37:31.500Z] 11032.12 IOPS, 43.09 MiB/s [2024-11-26T06:37:31.500Z] [2024-11-26 07:37:24.717438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:33584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.363 [2024-11-26 07:37:24.717475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:24.717493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.363 [2024-11-26 07:37:24.717502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:24.717513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.363 [2024-11-26 07:37:24.717521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:24.717531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:33608 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:47.363 [2024-11-26 07:37:24.717539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:24.717549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.363 [2024-11-26 07:37:24.717557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:24.717566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:33624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.363 [2024-11-26 07:37:24.717574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:24.717583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:33632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.363 [2024-11-26 07:37:24.717590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:24.717600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:33640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.363 [2024-11-26 07:37:24.717607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:24.717617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:33648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.363 [2024-11-26 07:37:24.717629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:24.717639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:33656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.363 [2024-11-26 07:37:24.717646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:24.717655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:33664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.363 [2024-11-26 07:37:24.717663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:24.717672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.363 [2024-11-26 07:37:24.717680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:24.717690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:33680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.363 [2024-11-26 07:37:24.717697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:24.717707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.363 [2024-11-26 07:37:24.717714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:24.717723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.363 [2024-11-26 07:37:24.717731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:24.717741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:33704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.363 [2024-11-26 07:37:24.717748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:24.717757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.363 [2024-11-26 07:37:24.717764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.363 [2024-11-26 07:37:24.717774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:33720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.717781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.717791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:33728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.717798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.717807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:33736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.717814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.717824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:33744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:47.364 [2024-11-26 07:37:24.717832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.717843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:33752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.717850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.717860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.717873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.717882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:33768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.717889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.717899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.717906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.717915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:33784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.717923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.717932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.717939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.717949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.717956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.717966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:33808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.717973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.717982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.717990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.717999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.718006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.718016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:33832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.718023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.718032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.718040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.718049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:33848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.718063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.718072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:33856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.718079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.718088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:33864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.718096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.718105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:33872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.718113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.718122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:33880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 
[2024-11-26 07:37:24.718129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.718138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:33888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.718145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.718155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.718162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.718171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.718178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.718188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:33912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.718195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.718204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.364 [2024-11-26 07:37:24.718212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.364 [2024-11-26 07:37:24.718221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:33928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.364 [2024-11-26 07:37:24.718228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~57 similar command/completion pairs elided: outstanding READ (lba:33936-34064) and WRITE (lba:34072-34384) commands on qid:1, each completed as ABORTED - SQ DELETION (00/08) ...]
[... 27 queued WRITE requests (lba:34392-34600) elided: each completed manually via nvme_qpair_abort_queued_reqs / nvme_qpair_manual_complete_request and reported as ABORTED - SQ DELETION (00/08) ...]
00:27:47.367 [2024-11-26 07:37:24.719954] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:27:47.367 [2024-11-26 07:37:24.719976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[... 4 admin ASYNC EVENT REQUESTs (qid:0, cid:0-3) elided, each aborted with ABORTED - SQ DELETION (00/08) ...]
00:27:47.367 [2024-11-26 07:37:24.720042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:27:47.367 [2024-11-26 07:37:24.720074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad8d80 (9): Bad file descriptor 00:27:47.367 [2024-11-26 07:37:24.723572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:47.367 [2024-11-26 07:37:24.755361] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:27:47.367 11016.33 IOPS, 43.03 MiB/s [2024-11-26T06:37:31.504Z] 11081.00 IOPS, 43.29 MiB/s [2024-11-26T06:37:31.504Z] 11097.45 IOPS, 43.35 MiB/s [2024-11-26T06:37:31.504Z] 11118.42 IOPS, 43.43 MiB/s [2024-11-26T06:37:31.504Z] 11139.08 IOPS, 43.51 MiB/s [2024-11-26T06:37:31.504Z] 11140.29 IOPS, 43.52 MiB/s 00:27:47.367 Latency(us) 00:27:47.367 [2024-11-26T06:37:31.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.367 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:47.367 Verification LBA range: start 0x0 length 0x4000 00:27:47.367 NVMe0n1 : 15.01 11144.85 43.53 344.72 0.00 11112.78 535.89 50244.27 00:27:47.367 [2024-11-26T06:37:31.504Z] =================================================================================================================== 00:27:47.367 [2024-11-26T06:37:31.504Z] Total : 11144.85 43.53 344.72 0.00 11112.78 535.89 50244.27 00:27:47.367 Received shutdown signal, test time was about 15.000000 seconds 00:27:47.367 00:27:47.367 Latency(us) 00:27:47.367 [2024-11-26T06:37:31.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.367 [2024-11-26T06:37:31.504Z] =================================================================================================================== 00:27:47.367 [2024-11-26T06:37:31.504Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:47.367 07:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:27:47.367 07:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:27:47.367 07:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:27:47.367 07:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2237950 00:27:47.367 07:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2237950 /var/tmp/bdevperf.sock 00:27:47.367 07:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:47.367 07:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2237950 ']' 00:27:47.367 07:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:47.367 07:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:47.367 07:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:47.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:47.367 07:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:47.367 07:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:48.066 07:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:48.066 07:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:48.067 07:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:48.067 [2024-11-26 07:37:32.033614] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:48.067 07:37:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:48.327 [2024-11-26 07:37:32.210037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:48.327 07:37:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:48.587 NVMe0n1 00:27:48.587 07:37:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:48.848 00:27:48.848 07:37:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:49.108 00:27:49.108 07:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:49.108 07:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:27:49.368 07:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:49.628 07:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:27:52.928 07:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:52.928 07:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:27:52.928 07:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2239181 00:27:52.929 07:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:52.929 07:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2239181 00:27:53.869 { 00:27:53.869 "results": [ 00:27:53.869 { 00:27:53.869 "job": "NVMe0n1", 00:27:53.869 "core_mask": "0x1", 00:27:53.869 "workload": "verify", 00:27:53.869 "status": "finished", 00:27:53.869 "verify_range": { 00:27:53.869 "start": 0, 00:27:53.869 "length": 16384 00:27:53.869 }, 00:27:53.869 "queue_depth": 128, 00:27:53.869 "io_size": 4096, 00:27:53.869 "runtime": 1.048893, 00:27:53.869 "iops": 11184.172265426501, 00:27:53.869 "mibps": 43.68817291182227, 00:27:53.869 "io_failed": 0, 00:27:53.869 "io_timeout": 0, 00:27:53.869 "avg_latency_us": 
10955.86628704572, 00:27:53.869 "min_latency_us": 1495.04, 00:27:53.869 "max_latency_us": 41287.68 00:27:53.869 } 00:27:53.869 ], 00:27:53.869 "core_count": 1 00:27:53.869 } 00:27:53.869 07:37:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:53.869 [2024-11-26 07:37:31.086814] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:27:53.869 [2024-11-26 07:37:31.086880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2237950 ] 00:27:53.869 [2024-11-26 07:37:31.165232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.869 [2024-11-26 07:37:31.200899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.869 [2024-11-26 07:37:33.480353] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:53.869 [2024-11-26 07:37:33.480400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.869 [2024-11-26 07:37:33.480412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.869 [2024-11-26 07:37:33.480422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.869 [2024-11-26 07:37:33.480430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.869 [2024-11-26 07:37:33.480438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 
cdw11:00000000 00:27:53.869 [2024-11-26 07:37:33.480446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.869 [2024-11-26 07:37:33.480455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.869 [2024-11-26 07:37:33.480462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.870 [2024-11-26 07:37:33.480470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:27:53.870 [2024-11-26 07:37:33.480497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:27:53.870 [2024-11-26 07:37:33.480511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199dd80 (9): Bad file descriptor 00:27:53.870 [2024-11-26 07:37:33.575087] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:27:53.870 Running I/O for 1 seconds... 
00:27:53.870 11602.00 IOPS, 45.32 MiB/s 00:27:53.870 Latency(us) 00:27:53.870 [2024-11-26T06:37:38.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.870 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:53.870 Verification LBA range: start 0x0 length 0x4000 00:27:53.870 NVMe0n1 : 1.05 11184.17 43.69 0.00 0.00 10955.87 1495.04 41287.68 00:27:53.870 [2024-11-26T06:37:38.007Z] =================================================================================================================== 00:27:53.870 [2024-11-26T06:37:38.007Z] Total : 11184.17 43.69 0.00 0.00 10955.87 1495.04 41287.68 00:27:53.870 07:37:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:53.870 07:37:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:27:54.131 07:37:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:54.131 07:37:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:54.131 07:37:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:27:54.391 07:37:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:54.652 07:37:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:27:57.955 07:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:57.955 07:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:27:57.955 07:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2237950 00:27:57.955 07:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2237950 ']' 00:27:57.955 07:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2237950 00:27:57.955 07:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:57.955 07:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:57.955 07:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2237950 00:27:57.955 07:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:57.955 07:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:57.955 07:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2237950' 00:27:57.955 killing process with pid 2237950 00:27:57.955 07:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2237950 00:27:57.955 07:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2237950 00:27:57.955 07:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:27:57.955 07:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:58.216 rmmod nvme_tcp 00:27:58.216 rmmod nvme_fabrics 00:27:58.216 rmmod nvme_keyring 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2234355 ']' 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2234355 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2234355 ']' 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2234355 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2234355 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2234355' 00:27:58.216 killing process with pid 2234355 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2234355 00:27:58.216 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2234355 00:27:58.478 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:58.478 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:58.478 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:58.478 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:27:58.478 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:58.478 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:27:58.478 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:27:58.478 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:58.478 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:58.478 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.478 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.478 07:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.398 07:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:00.398 00:28:00.398 real 0m41.103s 00:28:00.398 user 2m3.557s 00:28:00.398 sys 
0m9.350s 00:28:00.398 07:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:00.398 07:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:00.398 ************************************ 00:28:00.398 END TEST nvmf_failover 00:28:00.398 ************************************ 00:28:00.659 07:37:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:00.659 07:37:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:00.659 07:37:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:00.659 07:37:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.659 ************************************ 00:28:00.659 START TEST nvmf_host_discovery 00:28:00.659 ************************************ 00:28:00.659 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:00.659 * Looking for test storage... 
00:28:00.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:00.659 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:00.659 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:28:00.659 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:00.659 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:00.659 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:00.659 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:00.659 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:00.659 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:00.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.660 --rc genhtml_branch_coverage=1 00:28:00.660 --rc genhtml_function_coverage=1 00:28:00.660 --rc 
genhtml_legend=1 00:28:00.660 --rc geninfo_all_blocks=1 00:28:00.660 --rc geninfo_unexecuted_blocks=1 00:28:00.660 00:28:00.660 ' 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:00.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.660 --rc genhtml_branch_coverage=1 00:28:00.660 --rc genhtml_function_coverage=1 00:28:00.660 --rc genhtml_legend=1 00:28:00.660 --rc geninfo_all_blocks=1 00:28:00.660 --rc geninfo_unexecuted_blocks=1 00:28:00.660 00:28:00.660 ' 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:00.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.660 --rc genhtml_branch_coverage=1 00:28:00.660 --rc genhtml_function_coverage=1 00:28:00.660 --rc genhtml_legend=1 00:28:00.660 --rc geninfo_all_blocks=1 00:28:00.660 --rc geninfo_unexecuted_blocks=1 00:28:00.660 00:28:00.660 ' 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:00.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.660 --rc genhtml_branch_coverage=1 00:28:00.660 --rc genhtml_function_coverage=1 00:28:00.660 --rc genhtml_legend=1 00:28:00.660 --rc geninfo_all_blocks=1 00:28:00.660 --rc geninfo_unexecuted_blocks=1 00:28:00.660 00:28:00.660 ' 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.660 07:37:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.660 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.921 07:37:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.921 07:37:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:00.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:28:00.921 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:28:09.062 
07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.062 07:37:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:09.062 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:09.062 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:09.062 Found net devices under 0000:31:00.0: cvl_0_0 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:09.062 Found net devices under 0000:31:00.1: cvl_0_1 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.062 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:09.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:09.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:28:09.063 00:28:09.063 --- 10.0.0.2 ping statistics --- 00:28:09.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.063 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:09.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:09.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:28:09.063 00:28:09.063 --- 10.0.0.1 ping statistics --- 00:28:09.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.063 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:09.063 
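The `nvmf_tcp_init` trace above (nvmf/common.sh@250-291) isolates one port of the NIC in a network namespace for the target and leaves the other on the host as initiator. A condensed sketch of that setup, using the interface names (`cvl_0_0`/`cvl_0_1`) and 10.0.0.0/24 addressing from this run, would be — note these commands need root and the same two-port hardware, so this is an illustrative fragment, not something to run as-is:

```
# Sketch of the topology nvmftestinit builds in this trace (requires root).
ip netns add cvl_0_0_ns_spdk                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port (4420) and verify reachability both ways,
# exactly as the ping output above shows.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

All subsequent target commands in the log are then prefixed with `ip netns exec cvl_0_0_ns_spdk` via `NVMF_TARGET_NS_CMD`, which is why the `nvmf_tgt` launch at pid 2244892 runs inside the namespace.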
07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2244892 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2244892 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2244892 ']' 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:09.063 07:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.063 [2024-11-26 07:37:52.830002] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:28:09.063 [2024-11-26 07:37:52.830060] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.063 [2024-11-26 07:37:52.936731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.063 [2024-11-26 07:37:52.976632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.063 [2024-11-26 07:37:52.976678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:09.063 [2024-11-26 07:37:52.976687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.063 [2024-11-26 07:37:52.976694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.063 [2024-11-26 07:37:52.976700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:09.063 [2024-11-26 07:37:52.977441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.634 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:09.634 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:28:09.634 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:09.634 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:09.634 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.634 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:09.634 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:09.634 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.634 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.634 [2024-11-26 07:37:53.684846] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.634 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.634 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:09.634 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.634 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.634 [2024-11-26 07:37:53.693186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:09.634 07:37:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.635 null0 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.635 null1 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2244961 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2244961 /tmp/host.sock 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 2244961 ']' 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:09.635 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:09.635 07:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.895 [2024-11-26 07:37:53.779481] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:28:09.895 [2024-11-26 07:37:53.779547] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2244961 ] 00:28:09.895 [2024-11-26 07:37:53.862531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.895 [2024-11-26 07:37:53.905052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.467 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:10.467 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:28:10.467 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:10.467 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:10.467 
07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.467 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.467 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.467 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:10.467 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.467 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.467 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.467 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:28:10.467 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:28:10.467 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:10.467 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:10.467 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.467 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:10.467 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.727 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:10.727 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.727 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:10.727 07:37:54 
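The `rpc_cmd` calls traced in discovery.sh resolve to SPDK's `rpc.py` against two sockets: the default `/var/tmp/spdk.sock` for the namespaced target and `/tmp/host.sock` for the host-side `nvmf_tgt`. A sketch of the equivalent sequence (socket paths and arguments as in this run; assumes both `nvmf_tgt` processes are already up):

```
# Target side (default RPC socket): transport, discovery listener, null bdevs.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009          # discovery service on port 8009
scripts/rpc.py bdev_null_create null0 1000 512   # sizes as in this trace
scripts/rpc.py bdev_null_create null1 1000 512
scripts/rpc.py bdev_wait_for_examine
# Host side (-s /tmp/host.sock): start discovery against the target.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
```

The repeated `bdev_nvme_get_controllers` / `bdev_get_bdevs` polls that follow in the trace are the test's `waitforcondition` loop checking that the discovered controller (`nvme0`) and its bdevs appear on the host socket.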
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:28:10.727 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:10.727 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:10.727 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.727 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:10.727 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.727 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:10.727 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.727 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:28:10.727 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:10.727 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:28:10.728 
07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:10.728 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.989 [2024-11-26 07:37:54.920212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:10.989 07:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.989 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.250 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:28:11.250 07:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:28:11.510 [2024-11-26 07:37:55.632740] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:11.510 [2024-11-26 07:37:55.632761] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:11.510 [2024-11-26 07:37:55.632775] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:11.771 [2024-11-26 07:37:55.760232] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:12.033 [2024-11-26 07:37:55.942388] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:28:12.033 [2024-11-26 07:37:55.943491] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x24cd650:1 started. 00:28:12.033 [2024-11-26 07:37:55.945153] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:12.033 [2024-11-26 07:37:55.945172] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:12.033 [2024-11-26 07:37:55.952589] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24cd650 was disconnected and freed. delete nvme_qpair. 00:28:12.033 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:12.033 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:12.033 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:12.033 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:12.033 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:12.033 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.033 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:12.033 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.033 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:12.033 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:12.295 07:37:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:12.295 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:12.296 
07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:12.296 [2024-11-26 07:37:56.366947] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x24cd9d0:1 started. 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.296 [2024-11-26 07:37:56.414656] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24cd9d0 was disconnected and freed. delete nvme_qpair. 
00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.296 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.557 [2024-11-26 07:37:56.468312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:12.557 [2024-11-26 07:37:56.469188] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:12.557 [2024-11-26 07:37:56.469206] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:12.557 07:37:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:12.557 07:37:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:12.557 [2024-11-26 07:37:56.595602] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:12.557 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.558 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:28:12.558 07:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:28:12.818 [2024-11-26 07:37:56.901357] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:28:12.818 [2024-11-26 07:37:56.901400] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:12.818 [2024-11-26 07:37:56.901414] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:28:12.818 [2024-11-26 07:37:56.901420] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:13.762 [2024-11-26 07:37:57.740457] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:13.762 [2024-11-26 07:37:57.740483] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
'[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:13.762 [2024-11-26 07:37:57.746924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.762 [2024-11-26 07:37:57.746944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.762 [2024-11-26 07:37:57.746953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.762 [2024-11-26 07:37:57.746961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.762 [2024-11-26 07:37:57.746969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.762 [2024-11-26 07:37:57.746977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.762 [2024-11-26 07:37:57.746984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.762 [2024-11-26 07:37:57.746992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.762 [2024-11-26 07:37:57.746999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249dd90 is same with the state(6) to be set 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:13.762 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:13.763 07:37:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:13.763 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:28:13.763 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:13.763 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:28:13.763 [2024-11-26 07:37:57.756936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249dd90 (9): Bad file descriptor
00:28:13.763 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:13.763 [2024-11-26 07:37:57.766969] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:28:13.763 [2024-11-26 07:37:57.766981] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:28:13.763 [2024-11-26 07:37:57.766986] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:28:13.763 [2024-11-26 07:37:57.766991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:13.763 [2024-11-26 07:37:57.767010] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:28:13.763 [2024-11-26 07:37:57.767353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.763 [2024-11-26 07:37:57.767367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249dd90 with addr=10.0.0.2, port=4420
00:28:13.763 [2024-11-26 07:37:57.767379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249dd90 is same with the state(6) to be set
00:28:13.763 [2024-11-26 07:37:57.767391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249dd90 (9): Bad file descriptor
00:28:13.763 [2024-11-26 07:37:57.767402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:13.763 [2024-11-26 07:37:57.767409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:13.763 [2024-11-26 07:37:57.767417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:13.763 [2024-11-26 07:37:57.767424] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:28:13.763 [2024-11-26 07:37:57.767429] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:28:13.763 [2024-11-26 07:37:57.767435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:28:13.763 [2024-11-26 07:37:57.777039] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:28:13.763 [2024-11-26 07:37:57.777052] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:28:13.763 [2024-11-26 07:37:57.777057] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:28:13.763 [2024-11-26 07:37:57.777061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:13.763 [2024-11-26 07:37:57.777078] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:28:13.763 [2024-11-26 07:37:57.777446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.763 [2024-11-26 07:37:57.777457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249dd90 with addr=10.0.0.2, port=4420
00:28:13.763 [2024-11-26 07:37:57.777465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249dd90 is same with the state(6) to be set
00:28:13.763 [2024-11-26 07:37:57.777476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249dd90 (9): Bad file descriptor
00:28:13.763 [2024-11-26 07:37:57.777487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:13.763 [2024-11-26 07:37:57.777493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:13.763 [2024-11-26 07:37:57.777500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:13.763 [2024-11-26 07:37:57.777507] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:28:13.763 [2024-11-26 07:37:57.777511] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:28:13.763 [2024-11-26 07:37:57.777516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:28:13.763 [2024-11-26 07:37:57.787110] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:28:13.763 [2024-11-26 07:37:57.787125] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:28:13.763 [2024-11-26 07:37:57.787130] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:28:13.763 [2024-11-26 07:37:57.787135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:13.763 [2024-11-26 07:37:57.787151] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:28:13.763 [2024-11-26 07:37:57.787476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.763 [2024-11-26 07:37:57.787489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249dd90 with addr=10.0.0.2, port=4420
00:28:13.763 [2024-11-26 07:37:57.787497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249dd90 is same with the state(6) to be set
00:28:13.763 [2024-11-26 07:37:57.787509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249dd90 (9): Bad file descriptor
00:28:13.763 [2024-11-26 07:37:57.787520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:13.763 [2024-11-26 07:37:57.787526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:13.763 [2024-11-26 07:37:57.787534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:13.763 [2024-11-26 07:37:57.787540] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:28:13.763 [2024-11-26 07:37:57.787545] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:28:13.763 [2024-11-26 07:37:57.787550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:28:13.763 [2024-11-26 07:37:57.797181] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:28:13.763 [2024-11-26 07:37:57.797194] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:28:13.763 [2024-11-26 07:37:57.797199] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:28:13.763 [2024-11-26 07:37:57.797203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:13.763 [2024-11-26 07:37:57.797217] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:28:13.763 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:13.763 [2024-11-26 07:37:57.797531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.763 [2024-11-26 07:37:57.797543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249dd90 with addr=10.0.0.2, port=4420
00:28:13.763 [2024-11-26 07:37:57.797552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249dd90 is same with the state(6) to be set
00:28:13.763 [2024-11-26 07:37:57.797565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249dd90 (9): Bad file descriptor
00:28:13.763 [2024-11-26 07:37:57.797577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:13.763 [2024-11-26 07:37:57.797583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:13.763 [2024-11-26 07:37:57.797591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:13.763 [2024-11-26 07:37:57.797597] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:28:13.763 [2024-11-26 07:37:57.797602] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:28:13.763 [2024-11-26 07:37:57.797607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:28:13.763 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:13.763 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:28:13.763 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:28:13.763 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:13.763 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:13.763 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:28:13.763 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:28:13.763 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:13.763 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:13.763 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:13.763 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:13.763 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:28:13.763 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:28:13.763 [2024-11-26 07:37:57.807248] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:28:13.763 [2024-11-26 07:37:57.807262] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:28:13.763 [2024-11-26 07:37:57.807267] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:28:13.763 [2024-11-26 07:37:57.807272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:13.763 [2024-11-26 07:37:57.807285] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:28:13.763 [2024-11-26 07:37:57.807604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.763 [2024-11-26 07:37:57.807615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249dd90 with addr=10.0.0.2, port=4420
00:28:13.763 [2024-11-26 07:37:57.807623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249dd90 is same with the state(6) to be set
00:28:13.763 [2024-11-26 07:37:57.807634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249dd90 (9): Bad file descriptor
00:28:13.763 [2024-11-26 07:37:57.807646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:13.763 [2024-11-26 07:37:57.807653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:13.763 [2024-11-26 07:37:57.807660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:13.764 [2024-11-26 07:37:57.807667] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:28:13.764 [2024-11-26 07:37:57.807671] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:28:13.764 [2024-11-26 07:37:57.807677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:28:13.764 [2024-11-26 07:37:57.817318] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:28:13.764 [2024-11-26 07:37:57.817332] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:28:13.764 [2024-11-26 07:37:57.817337] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:28:13.764 [2024-11-26 07:37:57.817342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:13.764 [2024-11-26 07:37:57.817356] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:28:13.764 [2024-11-26 07:37:57.817686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.764 [2024-11-26 07:37:57.817699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249dd90 with addr=10.0.0.2, port=4420
00:28:13.764 [2024-11-26 07:37:57.817707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249dd90 is same with the state(6) to be set
00:28:13.764 [2024-11-26 07:37:57.817725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249dd90 (9): Bad file descriptor
00:28:13.764 [2024-11-26 07:37:57.817736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:13.764 [2024-11-26 07:37:57.817742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:13.764 [2024-11-26 07:37:57.817750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:13.764 [2024-11-26 07:37:57.817756] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:28:13.764 [2024-11-26 07:37:57.817761] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:28:13.764 [2024-11-26 07:37:57.817766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:28:13.764 [2024-11-26 07:37:57.827388] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:28:13.764 [2024-11-26 07:37:57.827400] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:28:13.764 [2024-11-26 07:37:57.827405] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:28:13.764 [2024-11-26 07:37:57.827410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:13.764 [2024-11-26 07:37:57.827424] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:28:13.764 [2024-11-26 07:37:57.827729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.764 [2024-11-26 07:37:57.827740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249dd90 with addr=10.0.0.2, port=4420
00:28:13.764 [2024-11-26 07:37:57.827748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249dd90 is same with the state(6) to be set
00:28:13.764 [2024-11-26 07:37:57.827759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249dd90 (9): Bad file descriptor
00:28:13.764 [2024-11-26 07:37:57.827770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:13.764 [2024-11-26 07:37:57.827776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:13.764 [2024-11-26 07:37:57.827783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:13.764 [2024-11-26 07:37:57.827790] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:28:13.764 [2024-11-26 07:37:57.827794] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:28:13.764 [2024-11-26 07:37:57.827799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:28:13.764 [2024-11-26 07:37:57.827972] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:28:13.764 [2024-11-26 07:37:57.827988] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:28:13.764 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:13.764 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:28:13.764 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:13.764 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:28:13.764 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:28:13.764 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:13.764 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:13.764 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:28:13.764 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:28:13.764 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:28:13.764 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:28:13.764 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:13.764
07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:28:13.764 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:13.764 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:28:13.764 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]]
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '.
| length'
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:28:14.026 07:37:57
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:28:14.026 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:14.027 07:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:14.027
07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:28:14.027 07:37:58
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:14.027 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:15.413 [2024-11-26 07:37:59.172732] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:28:15.413 [2024-11-26 07:37:59.172748] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:28:15.413 [2024-11-26 07:37:59.172761] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:28:15.413 [2024-11-26 07:37:59.300183] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009]
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:28:15.675 [2024-11-26 07:37:59.607733] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421
00:28:15.675 [2024-11-26 07:37:59.608523] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x24c7020:1 started.
00:28:15.675 [2024-11-26 07:37:59.610329] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:28:15.675 [2024-11-26 07:37:59.610356] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:28:15.675 [2024-11-26 07:37:59.613022] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x24c7020 was disconnected and freed. delete nvme_qpair.
00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:15.675 request: 00:28:15.675 { 00:28:15.675 "name": "nvme", 00:28:15.675 "trtype": "tcp", 00:28:15.675 "traddr": "10.0.0.2", 00:28:15.675 "adrfam": "ipv4", 00:28:15.675 "trsvcid": "8009", 00:28:15.675 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:15.675 "wait_for_attach": true, 00:28:15.675 "method": "bdev_nvme_start_discovery", 00:28:15.675 "req_id": 1 00:28:15.675 } 00:28:15.675 Got JSON-RPC error response 00:28:15.675 response: 00:28:15.675 { 00:28:15.675 "code": -17, 00:28:15.675 "message": "File exists" 00:28:15.675 } 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:15.675 request: 00:28:15.675 { 00:28:15.675 "name": "nvme_second", 00:28:15.675 "trtype": "tcp", 00:28:15.675 "traddr": "10.0.0.2", 00:28:15.675 "adrfam": "ipv4", 00:28:15.675 "trsvcid": "8009", 00:28:15.675 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:15.675 "wait_for_attach": true, 00:28:15.675 "method": "bdev_nvme_start_discovery", 00:28:15.675 "req_id": 1 00:28:15.675 } 00:28:15.675 Got JSON-RPC error response 00:28:15.675 response: 00:28:15.675 { 00:28:15.675 "code": -17, 00:28:15.675 "message": "File exists" 00:28:15.675 } 
00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.675 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:28:15.937 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:28:15.937 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:15.937 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:15.937 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 
00:28:15.937 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:15.937 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.937 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:15.937 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.937 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:15.937 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:15.937 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:15.937 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:15.937 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:15.937 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:15.937 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:15.937 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:15.937 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:15.937 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:15.937 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:16.881 [2024-11-26 07:38:00.865849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-11-26 07:38:00.865891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c90c0 with addr=10.0.0.2, port=8010 00:28:16.882 [2024-11-26 07:38:00.865911] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:16.882 [2024-11-26 07:38:00.865919] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:16.882 [2024-11-26 07:38:00.865926] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:17.824 [2024-11-26 07:38:01.868178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.824 [2024-11-26 07:38:01.868203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c06b0 with addr=10.0.0.2, port=8010 00:28:17.824 [2024-11-26 07:38:01.868216] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:17.824 [2024-11-26 07:38:01.868223] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:17.824 [2024-11-26 07:38:01.868229] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:18.767 [2024-11-26 07:38:02.870152] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:18.767 request: 00:28:18.767 { 00:28:18.767 "name": "nvme_second", 00:28:18.767 "trtype": "tcp", 00:28:18.767 "traddr": "10.0.0.2", 00:28:18.767 "adrfam": "ipv4", 00:28:18.767 "trsvcid": "8010", 00:28:18.767 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:18.767 "wait_for_attach": false, 00:28:18.767 "attach_timeout_ms": 3000, 00:28:18.767 "method": "bdev_nvme_start_discovery", 00:28:18.767 "req_id": 1 
00:28:18.767 } 00:28:18.767 Got JSON-RPC error response 00:28:18.767 response: 00:28:18.767 { 00:28:18.767 "code": -110, 00:28:18.767 "message": "Connection timed out" 00:28:18.767 } 00:28:18.767 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:18.767 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:18.768 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:18.768 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:18.768 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:18.768 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:28:18.768 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:18.768 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:18.768 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.768 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:18.768 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.768 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:18.768 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.028 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:28:19.028 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:28:19.028 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2244961 00:28:19.028 07:38:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:28:19.028 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:19.028 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:28:19.028 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:19.028 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:28:19.028 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:19.028 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:19.028 rmmod nvme_tcp 00:28:19.028 rmmod nvme_fabrics 00:28:19.028 rmmod nvme_keyring 00:28:19.028 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:19.028 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:28:19.028 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:28:19.028 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2244892 ']' 00:28:19.028 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2244892 00:28:19.028 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2244892 ']' 00:28:19.028 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2244892 00:28:19.028 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:28:19.028 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:19.028 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2244892 00:28:19.028 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:28:19.028 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:19.028 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2244892' 00:28:19.028 killing process with pid 2244892 00:28:19.028 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2244892 00:28:19.028 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2244892 00:28:19.289 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:19.289 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:19.289 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:19.289 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:28:19.289 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:28:19.289 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:19.289 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:28:19.289 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:19.289 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:19.289 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.289 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.289 07:38:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.202 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:28:21.202 00:28:21.202 real 0m20.682s 00:28:21.202 user 0m23.368s 00:28:21.202 sys 0m7.620s 00:28:21.202 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:21.202 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:21.202 ************************************ 00:28:21.202 END TEST nvmf_host_discovery 00:28:21.202 ************************************ 00:28:21.202 07:38:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:21.202 07:38:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:21.202 07:38:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:21.202 07:38:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.464 ************************************ 00:28:21.464 START TEST nvmf_host_multipath_status 00:28:21.464 ************************************ 00:28:21.464 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:21.464 * Looking for test storage... 
00:28:21.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:21.464 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:21.464 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:28:21.464 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:21.464 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:21.464 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:21.464 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:21.464 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:21.464 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:28:21.464 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:28:21.464 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:28:21.464 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:28:21.464 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:28:21.465 07:38:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:21.465 07:38:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:21.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.465 --rc genhtml_branch_coverage=1 00:28:21.465 --rc genhtml_function_coverage=1 00:28:21.465 --rc genhtml_legend=1 00:28:21.465 --rc geninfo_all_blocks=1 00:28:21.465 --rc geninfo_unexecuted_blocks=1 00:28:21.465 00:28:21.465 ' 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:21.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.465 --rc genhtml_branch_coverage=1 00:28:21.465 --rc genhtml_function_coverage=1 00:28:21.465 --rc genhtml_legend=1 00:28:21.465 --rc geninfo_all_blocks=1 00:28:21.465 --rc geninfo_unexecuted_blocks=1 00:28:21.465 00:28:21.465 ' 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:21.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.465 --rc genhtml_branch_coverage=1 00:28:21.465 --rc genhtml_function_coverage=1 00:28:21.465 --rc genhtml_legend=1 00:28:21.465 --rc geninfo_all_blocks=1 00:28:21.465 --rc geninfo_unexecuted_blocks=1 00:28:21.465 00:28:21.465 ' 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:21.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.465 --rc genhtml_branch_coverage=1 00:28:21.465 --rc genhtml_function_coverage=1 00:28:21.465 --rc genhtml_legend=1 00:28:21.465 --rc geninfo_all_blocks=1 00:28:21.465 --rc geninfo_unexecuted_blocks=1 00:28:21.465 00:28:21.465 ' 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:28:21.465 
07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:21.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:21.465 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:21.466 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:21.466 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.466 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:21.466 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.726 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:21.726 07:38:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:21.726 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:28:21.726 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:29.868 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:29.868 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:28:29.868 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:29.868 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:29.868 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:29.868 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:29.868 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:29.868 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:28:29.868 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:29.868 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:28:29.868 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:28:29.868 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:29.869 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:29.869 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:29.869 Found net devices under 0000:31:00.0: cvl_0_0 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.869 07:38:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:29.869 Found net devices under 0000:31:00.1: cvl_0_1 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:29.869 07:38:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:29.869 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:30.131 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:30.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:30.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:28:30.131 00:28:30.131 --- 10.0.0.2 ping statistics --- 00:28:30.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.131 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:30.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:30.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:28:30.131 00:28:30.131 --- 10.0.0.1 ping statistics --- 00:28:30.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.131 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2251741 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
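The `nvmf/common.sh@293` step above (`NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")`) prefixes the target application's argv with `ip netns exec <ns>`, which is why the target later launches as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`. A self-contained sketch of that array composition (the `nvmf_tgt` path and flags here are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of how common.sh wraps the target app in its network namespace.
# Names mirror the log; the binary name/flags are illustrative only.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(nvmf_tgt -i 0 -e 0xFFFF)

# Prepend the netns wrapper to the app's argv:
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
echo "${NVMF_APP[@]}"
# ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF
```

Keeping the command as a bash array (rather than a flat string) preserves word boundaries if any argument ever contains spaces.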
-- nvmf/common.sh@510 -- # waitforlisten 2251741 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2251741 ']' 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:30.131 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:30.131 [2024-11-26 07:38:14.147851] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:28:30.131 [2024-11-26 07:38:14.147927] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.131 [2024-11-26 07:38:14.243127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:30.391 [2024-11-26 07:38:14.283491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:30.391 [2024-11-26 07:38:14.283530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:30.391 [2024-11-26 07:38:14.283538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:30.391 [2024-11-26 07:38:14.283545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:30.391 [2024-11-26 07:38:14.283551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:30.391 [2024-11-26 07:38:14.284909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.391 [2024-11-26 07:38:14.284933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.962 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:30.962 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:28:30.962 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:30.962 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:30.962 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:30.962 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:30.962 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2251741 00:28:30.962 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:31.222 [2024-11-26 07:38:15.139167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:31.222 07:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:28:31.222 Malloc0 00:28:31.222 07:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:28:31.482 07:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:31.743 07:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:31.743 [2024-11-26 07:38:15.816185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:31.743 07:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:32.003 [2024-11-26 07:38:15.984634] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:32.003 07:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2252174 00:28:32.003 07:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:32.003 07:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:28:32.003 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2252174 /var/tmp/bdevperf.sock 00:28:32.003 07:38:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2252174 ']' 00:28:32.003 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:32.003 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:32.003 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:32.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:32.003 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:32.003 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:32.263 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:32.263 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:28:32.263 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:32.523 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:32.784 Nvme0n1 00:28:32.784 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:33.357 Nvme0n1 00:28:33.357 07:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:28:33.357 07:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:28:35.269 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:28:35.269 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:35.530 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:35.790 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:28:36.733 07:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:28:36.733 07:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:36.733 07:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:36.733 07:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:36.995 07:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:36.995 07:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:36.995 07:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:36.995 07:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:36.995 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:36.995 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:36.995 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:36.995 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:37.255 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:37.255 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:37.255 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:37.255 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:37.517 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:37.517 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:37.517 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:37.517 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:37.778 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:37.778 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:37.778 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:37.778 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:37.778 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:37.778 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:28:37.778 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:38.039 07:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
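The `port_status` checks above all follow the same pattern: dump the bdevperf path table with `bdev_nvme_get_io_paths` over the RPC socket, then pull one field for one listener port with jq. A standalone sketch of that jq filter run against sample output (the JSON document below is illustrative, not captured from this run):

```shell
#!/usr/bin/env bash
# Sketch: the jq filter port_status applies to bdev_nvme_get_io_paths
# output. The sample JSON is illustrative only.
json='{"poll_groups":[{"io_paths":[
  {"transport":{"trsvcid":"4420"},"current":true,"connected":true,"accessible":true},
  {"transport":{"trsvcid":"4421"},"current":false,"connected":true,"accessible":true}]}]}'

# Select the path whose listener port is 4420 and read its "current" flag:
status=$(echo "$json" | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current')

[[ "$status" == "true" ]] && echo "port 4420 is the current path"
```

In the log, the same filter is run per port for each of the `current`, `connected`, and `accessible` fields, and `check_status` compares the results against the expected ANA layout.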
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:38.301 07:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:28:39.242 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:28:39.242 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:39.242 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:39.242 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:39.503 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:39.503 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:39.503 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:39.503 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:39.503 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:39.503 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:39.503 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:39.503 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:39.763 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:39.764 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:39.764 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:39.764 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:40.025 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:40.025 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:40.025 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:40.025 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:40.025 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:40.025 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:40.025 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:40.025 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:40.286 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:40.286 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:28:40.286 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:40.546 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:40.546 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:28:42.030 07:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:28:42.030 07:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:42.030 07:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.030 07:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:42.030 07:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:42.030 07:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:42.030 07:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.030 07:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:42.030 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:42.030 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:42.030 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.030 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:42.326 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:42.326 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:42.326 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:42.326 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.326 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:42.326 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:42.326 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.326 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:42.618 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:42.618 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:42.618 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.618 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:42.880 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:42.880 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:28:42.880 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:42.880 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:43.141 07:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:28:44.085 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:28:44.085 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:44.085 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:44.085 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:44.346 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:44.346 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:44.346 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:44.346 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:44.606 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:44.606 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:44.606 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:44.606 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:44.606 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:44.606 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:44.606 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:44.606 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:44.865 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:44.865 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:44.865 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:44.865 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:45.124 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:45.124 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:45.125 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.125 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:45.385 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:45.385 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:45.385 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:45.385 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:45.646 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:46.586 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:28:46.586 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:46.586 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.586 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:46.846 07:38:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:46.846 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:46.846 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.846 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:47.107 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:47.107 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:47.107 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:47.107 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:47.107 07:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:47.107 07:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:47.107 07:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:47.107 07:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:47.367 
07:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:47.367 07:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:47.367 07:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:47.367 07:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:47.628 07:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:47.628 07:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:47.628 07:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:47.628 07:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:47.628 07:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:47.628 07:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:28:47.628 07:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:47.888 07:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:48.148 07:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:28:49.090 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:28:49.090 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:49.090 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.090 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:49.352 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:49.352 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:49.352 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.352 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:49.352 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:49.352 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:49.352 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.352 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:49.614 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:49.614 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:49.614 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:49.614 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.875 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:49.875 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:49.875 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.875 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:49.875 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:49.875 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:49.875 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.875 07:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:50.136 07:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:50.136 07:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:28:50.397 07:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:28:50.397 07:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:50.660 07:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:50.660 07:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:51.604 07:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:51.605 07:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:51.866 07:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:28:51.866 07:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:51.866 07:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:51.866 07:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:51.866 07:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.866 07:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:52.127 07:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:52.127 07:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:52.127 07:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:52.127 07:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:52.388 07:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:52.388 07:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:52.388 07:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:52.388 
07:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:52.388 07:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:52.388 07:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:52.388 07:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:52.388 07:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:52.649 07:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:52.649 07:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:52.649 07:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:52.649 07:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:52.911 07:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:52.911 07:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:52.911 07:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:52.911 07:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:53.172 07:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:28:54.115 07:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:28:54.115 07:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:54.115 07:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.115 07:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:54.376 07:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:54.376 07:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:54.376 07:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.376 07:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:54.636 07:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:54.636 07:38:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:28:54.636 07:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:28:54.636 07:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:54.636 07:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:54.636 07:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:28:54.636 07:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:54.636 07:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:28:54.897 07:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:54.897 07:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:28:54.897 07:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:54.897 07:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:28:55.158 07:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:55.158 07:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:28:55.158 07:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:55.158 07:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:28:55.158 07:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:55.159 07:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:28:55.159 07:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:28:55.419 07:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:28:55.679 07:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:28:56.624 07:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:28:56.624 07:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:28:56.624 07:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:56.624 07:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:28:56.885 07:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:56.885 07:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:28:56.885 07:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:56.885 07:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:28:57.146 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:57.146 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:28:57.146 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:57.146 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:28:57.146 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:57.146 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:28:57.146 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:57.146 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:28:57.407 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:57.407 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:28:57.407 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:57.407 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:28:57.668 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:57.668 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:28:57.668 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:57.668 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:28:57.668 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:57.668 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:28:57.668 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:28:57.928 07:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:28:58.188 07:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:28:59.130 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:28:59.130 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:28:59.130 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:59.130 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:28:59.391 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:59.391 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:28:59.391 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:59.391 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:28:59.391 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:28:59.391 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:28:59.391 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:59.391 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:28:59.651 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:59.651 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:28:59.651 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:59.651 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:28:59.912 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:59.912 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:28:59.912 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:28:59.912 07:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:59.912 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:59.912 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:28:59.912 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:59.912 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:29:00.172 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:00.172 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2252174
00:29:00.172 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2252174 ']'
00:29:00.172 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2252174
00:29:00.172 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:29:00.172 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:00.172 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2252174
00:29:00.172 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:29:00.172 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:29:00.172 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2252174'
killing process with pid 2252174
00:29:00.172 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2252174
00:29:00.172 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2252174
00:29:00.172 {
00:29:00.172   "results": [
00:29:00.172     {
00:29:00.172       "job": "Nvme0n1",
00:29:00.172       "core_mask": "0x4",
00:29:00.172       "workload": "verify",
00:29:00.172       "status": "terminated",
00:29:00.172       "verify_range": {
00:29:00.172         "start": 0,
00:29:00.172         "length": 16384
00:29:00.172       },
00:29:00.172       "queue_depth": 128,
00:29:00.172       "io_size": 4096,
00:29:00.172       "runtime": 26.809087,
00:29:00.172       "iops": 10763.551925509437,
00:29:00.172       "mibps": 42.04512470902124,
00:29:00.172       "io_failed": 0,
00:29:00.172       "io_timeout": 0,
00:29:00.172       "avg_latency_us": 11873.865864710293,
00:29:00.172       "min_latency_us": 192.0,
00:29:00.172       "max_latency_us": 3019898.88
00:29:00.172     }
00:29:00.172   ],
00:29:00.172   "core_count": 1
00:29:00.172 }
00:29:00.435 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2252174
00:29:00.435 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:29:00.435 [2024-11-26 07:38:16.058589] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization...
00:29:00.435 [2024-11-26 07:38:16.058661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2252174 ]
00:29:00.436 [2024-11-26 07:38:16.124091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:00.436 [2024-11-26 07:38:16.153012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:00.436 Running I/O for 90 seconds...
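The `port_status` checks traced above each fetch `bdev_nvme_get_io_paths` over the bdevperf RPC socket and pull one attribute out of the matching io_path with jq. A minimal Python sketch of the same selection logic, for readers following the trace; the abbreviated JSON shape here is an assumption modeled on the fields the jq filter touches (`poll_groups[].io_paths[].transport.trsvcid`, `current`, `connected`, `accessible`), not the full RPC output:

```python
import json

# Abbreviated sample shaped like bdev_nvme_get_io_paths output (assumption:
# only the fields the test's jq filter reads are included here).
sample = json.loads("""
{
  "poll_groups": [
    {"io_paths": [
      {"transport": {"trsvcid": "4420"}, "current": true,  "connected": true, "accessible": true},
      {"transport": {"trsvcid": "4421"}, "current": false, "connected": true, "accessible": false}
    ]}
  ]
}
""")

def port_status(data, port, attr):
    """Mirror of the test's filter:
    jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="PORT").ATTR'"""
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[attr]
    return None

print(port_status(sample, "4420", "current"))     # True
print(port_status(sample, "4421", "accessible"))  # False
```

The shell helper then compares the extracted string against the expected `true`/`false`, which is what the `[[ true == \t\r\u\e ]]` lines in the trace are.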
00:29:00.436 9460.00 IOPS, 36.95 MiB/s [2024-11-26T06:38:44.573Z] 9570.50 IOPS, 37.38 MiB/s [2024-11-26T06:38:44.573Z] 9592.00 IOPS, 37.47 MiB/s [2024-11-26T06:38:44.573Z] 9600.00 IOPS, 37.50 MiB/s [2024-11-26T06:38:44.573Z] 9844.60 IOPS, 38.46 MiB/s [2024-11-26T06:38:44.573Z] 10339.00 IOPS, 40.39 MiB/s [2024-11-26T06:38:44.573Z] 10695.00 IOPS, 41.78 MiB/s [2024-11-26T06:38:44.573Z] 10683.75 IOPS, 41.73 MiB/s [2024-11-26T06:38:44.573Z] 10561.33 IOPS, 41.26 MiB/s [2024-11-26T06:38:44.573Z] 10478.50 IOPS, 40.93 MiB/s [2024-11-26T06:38:44.573Z] 10407.73 IOPS, 40.66 MiB/s [2024-11-26T06:38:44.573Z] [2024-11-26 07:38:29.417840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.436 [2024-11-26 07:38:29.417880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.417921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.417930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.417943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.417951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.417965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.417973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.417986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.417994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.418008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.418016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.418031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.418039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.418053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.418060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.418075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.418084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.418097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74104 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.418113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.418128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.418136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.418150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.418157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.418171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.418178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.418193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.418201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.418215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.418223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 
dnr:0 00:29:00.436 [2024-11-26 07:38:29.418237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.418245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.418258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.418265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.419635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.419653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.419670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.419680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.419696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.419705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.419722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 
[2024-11-26 07:38:29.419731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.419748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.419757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.419777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.419786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.419802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.419811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.419827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.419836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.419852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.419865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 
07:38:29.419882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.419891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.419907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.419916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.419932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.419941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.419957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.419965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.419982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.419990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.420007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.420015] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.420032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.420040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.420057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.420066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.420124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.436 [2024-11-26 07:38:29.420134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:00.436 [2024-11-26 07:38:29.420153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.420978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.420987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.421005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.421013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.421030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.421039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.421056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.421065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.421082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.421090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.421110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.421118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.421136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.421144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:00.437 [2024-11-26 07:38:29.421162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.437 [2024-11-26 07:38:29.421171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421628] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.421983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.421992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.422012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.422021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.422042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.422050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.422071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.422080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.422101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.422110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.422130] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.422138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.422159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.422168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.422190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.422199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.422219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.422228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.422249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.422260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.422281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.422289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.422310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.422319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.422340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.422349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.422369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.422378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.438 [2024-11-26 07:38:29.422399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.438 [2024-11-26 07:38:29.422407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:29.422428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:29.422437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:29.422458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:29.422466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:29.422487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:29.422495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:29.422516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:29.422524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:29.422545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:29.422554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:29.422575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:29.422585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:29.422605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:29.422613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:29.422636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:29.422645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:29.422666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:29.422675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:29.422695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:29.422704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:29.422724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:29.422733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:29.422755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:29.422763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:29.422785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:29.422794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:29.422814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:29.422823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:29.422844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:29.422853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:00.439 10328.50 IOPS, 40.35 MiB/s [2024-11-26T06:38:44.576Z] 9534.00 IOPS, 37.24 MiB/s [2024-11-26T06:38:44.576Z] 8853.00 IOPS, 34.58 MiB/s [2024-11-26T06:38:44.576Z] 8279.27 IOPS, 32.34 MiB/s [2024-11-26T06:38:44.576Z] 8564.06 IOPS, 33.45 MiB/s [2024-11-26T06:38:44.576Z] 8829.35 IOPS, 34.49 MiB/s [2024-11-26T06:38:44.576Z] 9241.72 IOPS, 36.10 MiB/s [2024-11-26T06:38:44.576Z] 9636.42 IOPS, 37.64 MiB/s [2024-11-26T06:38:44.576Z] 9918.05 IOPS, 38.74 MiB/s [2024-11-26T06:38:44.576Z] 10079.62 IOPS, 39.37 MiB/s [2024-11-26T06:38:44.576Z] 10210.23 IOPS, 39.88 MiB/s [2024-11-26T06:38:44.576Z] 10457.43 IOPS, 40.85 MiB/s [2024-11-26T06:38:44.576Z] 10719.04 IOPS, 41.87 MiB/s [2024-11-26T06:38:44.576Z] [2024-11-26 07:38:42.070197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:42.070235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:42.070284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:42.070305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:42.070331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:42.070352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:42.070374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 
nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:42.070396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:42.070418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.439 [2024-11-26 07:38:42.070441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.439 [2024-11-26 07:38:42.070464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.439 [2024-11-26 07:38:42.070486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.439 [2024-11-26 07:38:42.070508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.439 [2024-11-26 07:38:42.070530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.439 [2024-11-26 07:38:42.070550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.439 [2024-11-26 07:38:42.070571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.439 [2024-11-26 07:38:42.070595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:42.070615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:41336 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.439 [2024-11-26 07:38:42.070639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.439 [2024-11-26 07:38:42.070663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:42.070688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.439 [2024-11-26 07:38:42.070713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:00.439 [2024-11-26 07:38:42.070943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.440 [2024-11-26 07:38:42.070957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.440 [2024-11-26 07:38:42.070974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.440 [2024-11-26 07:38:42.070983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:29:00.440 [2024-11-26 07:38:42.070999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.440 [2024-11-26 07:38:42.071007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:00.440 [2024-11-26 07:38:42.071023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.440 [2024-11-26 07:38:42.071032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:00.440 [2024-11-26 07:38:42.071047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.440 [2024-11-26 07:38:42.071056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:00.440 [2024-11-26 07:38:42.071072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.440 [2024-11-26 07:38:42.071081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:00.440 [2024-11-26 07:38:42.071096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:00.440 [2024-11-26 07:38:42.071108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:00.440 [2024-11-26 07:38:42.071123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:00.440 [2024-11-26 07:38:42.071132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:00.440 [2024-11-26 07:38:42.071147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-11-26 07:38:42.071157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:00.440 [2024-11-26 07:38:42.071172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-11-26 07:38:42.071181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:00.440 [2024-11-26 07:38:42.071196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-11-26 07:38:42.071205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:00.440 10842.52 IOPS, 42.35 MiB/s [2024-11-26T06:38:44.577Z] 10799.58 IOPS, 42.19 MiB/s [2024-11-26T06:38:44.577Z] Received shutdown signal, test time was about 26.809697 seconds 00:29:00.440 00:29:00.440 Latency(us) 00:29:00.440 [2024-11-26T06:38:44.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.440 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:00.440 Verification LBA range: start 0x0 length 0x4000 00:29:00.440 Nvme0n1 : 26.81 10763.55 42.05 0.00 0.00 11873.87 192.00 3019898.88 00:29:00.440 [2024-11-26T06:38:44.577Z] =================================================================================================================== 
00:29:00.440 [2024-11-26T06:38:44.577Z] Total : 10763.55 42.05 0.00 0.00 11873.87 192.00 3019898.88 00:29:00.440 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:00.440 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:29:00.440 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:00.440 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:29:00.440 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:00.440 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:29:00.440 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:00.440 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:29:00.440 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:00.440 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:00.702 rmmod nvme_tcp 00:29:00.702 rmmod nvme_fabrics 00:29:00.702 rmmod nvme_keyring 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2251741 ']' 00:29:00.702 07:38:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2251741 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2251741 ']' 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2251741 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2251741 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2251741' 00:29:00.702 killing process with pid 2251741 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2251741 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2251741 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:29:00.702 07:38:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:29:00.702 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:00.963 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:00.963 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:00.963 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.963 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.963 07:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.875 07:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:02.875 00:29:02.875 real 0m41.556s 00:29:02.875 user 1m44.587s 00:29:02.875 sys 0m12.289s 00:29:02.875 07:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:02.875 07:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:02.875 ************************************ 00:29:02.875 END TEST nvmf_host_multipath_status 00:29:02.875 ************************************ 00:29:02.875 07:38:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:02.875 07:38:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:02.875 07:38:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:02.875 07:38:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.875 ************************************ 
00:29:02.875 START TEST nvmf_discovery_remove_ifc 00:29:02.875 ************************************ 00:29:02.875 07:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:03.138 * Looking for test storage... 00:29:03.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:03.138 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:03.138 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:29:03.138 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:03.138 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:03.138 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:03.138 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:03.138 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:03.138 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:29:03.138 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:29:03.138 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:29:03.138 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:29:03.138 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:29:03.138 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:29:03.138 07:38:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:29:03.138 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:03.138 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:03.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.139 --rc genhtml_branch_coverage=1 00:29:03.139 --rc genhtml_function_coverage=1 00:29:03.139 --rc genhtml_legend=1 00:29:03.139 --rc geninfo_all_blocks=1 00:29:03.139 --rc geninfo_unexecuted_blocks=1 00:29:03.139 00:29:03.139 ' 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:03.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.139 --rc genhtml_branch_coverage=1 00:29:03.139 --rc genhtml_function_coverage=1 00:29:03.139 --rc genhtml_legend=1 00:29:03.139 --rc geninfo_all_blocks=1 00:29:03.139 --rc geninfo_unexecuted_blocks=1 00:29:03.139 00:29:03.139 ' 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:03.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.139 --rc genhtml_branch_coverage=1 00:29:03.139 --rc genhtml_function_coverage=1 00:29:03.139 --rc genhtml_legend=1 00:29:03.139 --rc geninfo_all_blocks=1 00:29:03.139 --rc geninfo_unexecuted_blocks=1 00:29:03.139 00:29:03.139 ' 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:03.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.139 --rc genhtml_branch_coverage=1 00:29:03.139 --rc genhtml_function_coverage=1 00:29:03.139 --rc genhtml_legend=1 00:29:03.139 --rc geninfo_all_blocks=1 00:29:03.139 --rc geninfo_unexecuted_blocks=1 00:29:03.139 
00:29:03.139 ' 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:03.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:29:03.139 
07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:03.139 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.140 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:03.140 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.140 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:03.140 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:03.140 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:29:03.140 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:11.290 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:11.291 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:11.291 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:11.291 Found net devices under 0000:31:00.0: cvl_0_0 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.291 07:38:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:11.291 Found net devices under 0000:31:00.1: cvl_0_1 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:11.291 07:38:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:11.291 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:11.552 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:11.552 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:11.552 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:11.552 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:11.552 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:11.552 07:38:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:11.552 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:11.552 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:11.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:29:11.552 00:29:11.552 --- 10.0.0.2 ping statistics --- 00:29:11.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.552 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:29:11.552 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:11.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:11.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:29:11.553 00:29:11.553 --- 10.0.0.1 ping statistics --- 00:29:11.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.553 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2262435 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 2262435 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2262435 ']' 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.553 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:11.814 [2024-11-26 07:38:55.725984] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:29:11.814 [2024-11-26 07:38:55.726051] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.814 [2024-11-26 07:38:55.833482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.814 [2024-11-26 07:38:55.884675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.814 [2024-11-26 07:38:55.884725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:11.814 [2024-11-26 07:38:55.884733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.814 [2024-11-26 07:38:55.884741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.814 [2024-11-26 07:38:55.884748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:11.814 [2024-11-26 07:38:55.885504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.758 07:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:12.758 07:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:29:12.758 07:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:12.758 07:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:12.758 07:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:12.758 07:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.758 07:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:12.758 07:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.758 07:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:12.758 [2024-11-26 07:38:56.595369] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.758 [2024-11-26 07:38:56.603629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:12.758 null0 00:29:12.758 [2024-11-26 07:38:56.635590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:29:12.758 07:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.758 07:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2262779 00:29:12.758 07:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2262779 /tmp/host.sock 00:29:12.758 07:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:29:12.758 07:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2262779 ']' 00:29:12.758 07:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:29:12.758 07:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:12.758 07:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:12.758 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:12.758 07:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:12.758 07:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:12.758 [2024-11-26 07:38:56.716562] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:29:12.758 [2024-11-26 07:38:56.716649] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2262779 ] 00:29:12.758 [2024-11-26 07:38:56.801919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.758 [2024-11-26 07:38:56.843828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.700 07:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.700 07:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:29:13.700 07:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:13.700 07:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:29:13.700 07:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.700 07:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:13.700 07:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.700 07:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:29:13.700 07:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.700 07:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:13.700 07:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.700 07:38:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:29:13.700 07:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.700 07:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:14.642 [2024-11-26 07:38:58.644945] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:14.642 [2024-11-26 07:38:58.644964] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:14.642 [2024-11-26 07:38:58.644978] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:14.642 [2024-11-26 07:38:58.732264] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:14.903 [2024-11-26 07:38:58.957543] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:29:14.903 [2024-11-26 07:38:58.958564] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xbc8670:1 started. 
00:29:14.903 [2024-11-26 07:38:58.960138] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:14.903 [2024-11-26 07:38:58.960182] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:14.903 [2024-11-26 07:38:58.960203] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:14.903 [2024-11-26 07:38:58.960217] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:14.903 [2024-11-26 07:38:58.960237] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:14.903 07:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.903 07:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:29:14.903 [2024-11-26 07:38:58.963422] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xbc8670 was disconnected and freed. delete nvme_qpair. 
00:29:14.903 07:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:14.903 07:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:14.903 07:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:14.903 07:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.903 07:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:14.903 07:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:14.903 07:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:14.903 07:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.903 07:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:29:14.903 07:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:29:14.903 07:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:29:15.172 07:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:29:15.172 07:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:15.172 07:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:15.172 07:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:15.172 07:38:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.172 07:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:15.172 07:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:15.172 07:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:15.172 07:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.172 07:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:15.172 07:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:16.117 07:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:16.117 07:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:16.117 07:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:16.117 07:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:16.117 07:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.117 07:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:16.117 07:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:16.117 07:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.377 07:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:16.377 07:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:29:17.317 07:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:17.317 07:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:17.318 07:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:17.318 07:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.318 07:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:17.318 07:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:17.318 07:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:17.318 07:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.318 07:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:17.318 07:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:18.259 07:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:18.259 07:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:18.259 07:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:18.259 07:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.259 07:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:18.259 07:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:18.259 07:39:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:18.259 07:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.259 07:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:18.259 07:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:19.642 07:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:19.642 07:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:19.642 07:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:19.642 07:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.642 07:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:19.642 07:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:19.642 07:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:19.642 07:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.642 07:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:19.642 07:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:20.583 [2024-11-26 07:39:04.400849] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:29:20.583 [2024-11-26 07:39:04.400899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.583 [2024-11-26 07:39:04.400912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.583 [2024-11-26 07:39:04.400921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.583 [2024-11-26 07:39:04.400929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.583 [2024-11-26 07:39:04.400937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.583 [2024-11-26 07:39:04.400944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.583 [2024-11-26 07:39:04.400952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.583 [2024-11-26 07:39:04.400960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.583 [2024-11-26 07:39:04.400973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.583 [2024-11-26 07:39:04.400981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.583 [2024-11-26 07:39:04.400989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5050 is same with the state(6) to be set 00:29:20.583 [2024-11-26 07:39:04.410870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba5050 (9): Bad file descriptor 00:29:20.583 [2024-11-26 07:39:04.420905] 
bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:20.583 [2024-11-26 07:39:04.420917] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:20.583 [2024-11-26 07:39:04.420922] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:20.583 [2024-11-26 07:39:04.420927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:20.583 [2024-11-26 07:39:04.420949] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:20.583 07:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:20.583 07:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:20.583 07:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:20.583 07:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.583 07:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:20.583 07:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:20.583 07:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:21.523 [2024-11-26 07:39:05.435934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:21.523 [2024-11-26 07:39:05.435978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba5050 with addr=10.0.0.2, port=4420 00:29:21.523 [2024-11-26 07:39:05.435990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba5050 is same with the 
state(6) to be set 00:29:21.523 [2024-11-26 07:39:05.436016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba5050 (9): Bad file descriptor 00:29:21.523 [2024-11-26 07:39:05.436392] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:29:21.523 [2024-11-26 07:39:05.436417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:21.523 [2024-11-26 07:39:05.436425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:21.523 [2024-11-26 07:39:05.436434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:21.523 [2024-11-26 07:39:05.436442] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:21.523 [2024-11-26 07:39:05.436447] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:21.523 [2024-11-26 07:39:05.436453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:21.523 [2024-11-26 07:39:05.436461] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:29:21.523 [2024-11-26 07:39:05.436466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:21.523 07:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.523 07:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:21.523 07:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:22.464 [2024-11-26 07:39:06.438837] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:22.464 [2024-11-26 07:39:06.438857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:22.464 [2024-11-26 07:39:06.438872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:22.464 [2024-11-26 07:39:06.438879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:22.464 [2024-11-26 07:39:06.438887] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:29:22.464 [2024-11-26 07:39:06.438894] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:22.464 [2024-11-26 07:39:06.438900] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:22.464 [2024-11-26 07:39:06.438904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:29:22.464 [2024-11-26 07:39:06.438925] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:29:22.464 [2024-11-26 07:39:06.438949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.464 [2024-11-26 07:39:06.438960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.464 [2024-11-26 07:39:06.438970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.464 [2024-11-26 07:39:06.438978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.464 [2024-11-26 07:39:06.438986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.464 [2024-11-26 07:39:06.438994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.464 [2024-11-26 07:39:06.439002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.464 [2024-11-26 07:39:06.439009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.464 [2024-11-26 07:39:06.439018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.464 [2024-11-26 07:39:06.439025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.464 [2024-11-26 07:39:06.439033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:29:22.464 [2024-11-26 07:39:06.439557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb94380 (9): Bad file descriptor 00:29:22.464 [2024-11-26 07:39:06.440570] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:29:22.464 [2024-11-26 07:39:06.440581] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:29:22.464 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:22.464 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:22.464 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:22.465 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.465 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:22.465 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:22.465 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:22.465 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.465 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:29:22.465 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:22.465 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:22.725 07:39:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:29:22.725 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:22.725 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:22.725 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:22.725 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.725 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:22.725 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:22.725 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:22.725 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.725 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:22.725 07:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:23.666 07:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:23.666 07:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:23.666 07:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:23.666 07:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.666 07:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:23.666 07:39:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:23.666 07:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:23.666 07:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.666 07:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:23.666 07:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:24.608 [2024-11-26 07:39:08.500104] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:24.608 [2024-11-26 07:39:08.500121] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:24.608 [2024-11-26 07:39:08.500135] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:24.608 [2024-11-26 07:39:08.587404] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:29:24.608 [2024-11-26 07:39:08.647164] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:29:24.608 [2024-11-26 07:39:08.647974] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xbaf860:1 started. 
00:29:24.608 [2024-11-26 07:39:08.649193] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:24.608 [2024-11-26 07:39:08.649230] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:24.608 [2024-11-26 07:39:08.649250] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:24.608 [2024-11-26 07:39:08.649264] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:29:24.608 [2024-11-26 07:39:08.649272] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:24.608 [2024-11-26 07:39:08.657400] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xbaf860 was disconnected and freed. delete nvme_qpair. 00:29:24.608 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:24.608 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:24.608 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:24.608 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:24.608 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.608 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:24.608 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:24.869 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.869 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:24.869 07:39:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:24.869 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2262779 00:29:24.869 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2262779 ']' 00:29:24.869 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2262779 00:29:24.869 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:29:24.869 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.869 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2262779 00:29:24.869 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:24.869 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:24.869 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2262779' 00:29:24.869 killing process with pid 2262779 00:29:24.869 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2262779 00:29:24.869 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2262779 00:29:24.869 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:24.869 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:24.869 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:29:24.869 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:24.869 
07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:29:24.869 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:24.869 07:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:24.869 rmmod nvme_tcp 00:29:24.869 rmmod nvme_fabrics 00:29:24.869 rmmod nvme_keyring 00:29:25.130 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:25.130 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:29:25.130 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:29:25.130 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2262435 ']' 00:29:25.130 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2262435 00:29:25.130 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2262435 ']' 00:29:25.130 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2262435 00:29:25.130 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:29:25.130 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.130 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2262435 00:29:25.130 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:25.130 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:25.130 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2262435' 00:29:25.130 
killing process with pid 2262435 00:29:25.130 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2262435 00:29:25.131 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2262435 00:29:25.131 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:25.131 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:25.131 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:25.131 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:29:25.131 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:29:25.131 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:29:25.131 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:25.131 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:25.131 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:25.131 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.131 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.131 07:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:27.679 00:29:27.679 real 0m24.280s 00:29:27.679 user 0m27.570s 00:29:27.679 sys 0m7.796s 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:27.679 ************************************ 00:29:27.679 END TEST nvmf_discovery_remove_ifc 00:29:27.679 ************************************ 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.679 ************************************ 00:29:27.679 START TEST nvmf_identify_kernel_target 00:29:27.679 ************************************ 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:27.679 * Looking for test storage... 
00:29:27.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:29:27.679 07:39:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:27.679 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:27.680 07:39:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:27.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.680 --rc genhtml_branch_coverage=1 00:29:27.680 --rc genhtml_function_coverage=1 00:29:27.680 --rc genhtml_legend=1 00:29:27.680 --rc geninfo_all_blocks=1 00:29:27.680 --rc geninfo_unexecuted_blocks=1 00:29:27.680 00:29:27.680 ' 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:27.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.680 --rc genhtml_branch_coverage=1 00:29:27.680 --rc genhtml_function_coverage=1 00:29:27.680 --rc genhtml_legend=1 00:29:27.680 --rc geninfo_all_blocks=1 00:29:27.680 --rc geninfo_unexecuted_blocks=1 00:29:27.680 00:29:27.680 ' 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:27.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.680 --rc genhtml_branch_coverage=1 00:29:27.680 --rc genhtml_function_coverage=1 00:29:27.680 --rc genhtml_legend=1 00:29:27.680 --rc geninfo_all_blocks=1 00:29:27.680 --rc geninfo_unexecuted_blocks=1 00:29:27.680 00:29:27.680 ' 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:27.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.680 --rc genhtml_branch_coverage=1 00:29:27.680 --rc genhtml_function_coverage=1 00:29:27.680 --rc genhtml_legend=1 00:29:27.680 --rc geninfo_all_blocks=1 00:29:27.680 --rc geninfo_unexecuted_blocks=1 00:29:27.680 00:29:27.680 ' 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:29:27.680 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:27.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:29:27.681 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:35.983 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:35.983 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:29:35.983 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:35.983 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:35.983 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:35.983 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:35.983 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:35.983 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:29:35.983 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:35.983 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:29:35.983 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.984 07:39:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:35.984 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.984 07:39:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:35.984 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.984 07:39:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:35.984 Found net devices under 0000:31:00.0: cvl_0_0 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:35.984 Found net devices under 0000:31:00.1: cvl_0_1 
00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:35.984 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:35.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:35.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:29:35.985 00:29:35.985 --- 10.0.0.2 ping statistics --- 00:29:35.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.985 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:35.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:35.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:29:35.985 00:29:35.985 --- 10.0.0.1 ping statistics --- 00:29:35.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.985 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:29:35.985 
07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:35.985 07:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:40.188 Waiting for block devices as requested 00:29:40.188 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:40.188 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:40.188 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:40.188 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:40.188 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:40.188 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:40.188 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:40.188 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:40.448 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:40.448 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:40.448 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:40.708 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:40.708 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:40.708 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:40.708 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:29:40.967 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:40.967 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:41.236 No valid GPT data, bailing 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:29:41.236 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:29:41.237 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:29:41.237 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:29:41.237 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:41.497 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:29:41.497 00:29:41.497 Discovery Log Number of Records 2, Generation counter 2 00:29:41.497 =====Discovery Log Entry 0====== 00:29:41.497 trtype: tcp 00:29:41.497 adrfam: ipv4 00:29:41.497 subtype: current discovery subsystem 
00:29:41.497 treq: not specified, sq flow control disable supported 00:29:41.497 portid: 1 00:29:41.497 trsvcid: 4420 00:29:41.497 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:41.497 traddr: 10.0.0.1 00:29:41.497 eflags: none 00:29:41.497 sectype: none 00:29:41.497 =====Discovery Log Entry 1====== 00:29:41.497 trtype: tcp 00:29:41.497 adrfam: ipv4 00:29:41.497 subtype: nvme subsystem 00:29:41.497 treq: not specified, sq flow control disable supported 00:29:41.497 portid: 1 00:29:41.497 trsvcid: 4420 00:29:41.497 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:41.497 traddr: 10.0.0.1 00:29:41.497 eflags: none 00:29:41.497 sectype: none 00:29:41.497 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:29:41.497 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:29:41.497 ===================================================== 00:29:41.497 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:41.497 ===================================================== 00:29:41.497 Controller Capabilities/Features 00:29:41.497 ================================ 00:29:41.497 Vendor ID: 0000 00:29:41.497 Subsystem Vendor ID: 0000 00:29:41.497 Serial Number: d4719dfc36ae9c24d6a4 00:29:41.497 Model Number: Linux 00:29:41.497 Firmware Version: 6.8.9-20 00:29:41.497 Recommended Arb Burst: 0 00:29:41.497 IEEE OUI Identifier: 00 00 00 00:29:41.497 Multi-path I/O 00:29:41.497 May have multiple subsystem ports: No 00:29:41.497 May have multiple controllers: No 00:29:41.497 Associated with SR-IOV VF: No 00:29:41.497 Max Data Transfer Size: Unlimited 00:29:41.497 Max Number of Namespaces: 0 00:29:41.497 Max Number of I/O Queues: 1024 00:29:41.497 NVMe Specification Version (VS): 1.3 00:29:41.497 NVMe Specification Version (Identify): 1.3 00:29:41.497 Maximum Queue Entries: 1024 
00:29:41.497 Contiguous Queues Required: No 00:29:41.497 Arbitration Mechanisms Supported 00:29:41.497 Weighted Round Robin: Not Supported 00:29:41.497 Vendor Specific: Not Supported 00:29:41.497 Reset Timeout: 7500 ms 00:29:41.497 Doorbell Stride: 4 bytes 00:29:41.497 NVM Subsystem Reset: Not Supported 00:29:41.497 Command Sets Supported 00:29:41.497 NVM Command Set: Supported 00:29:41.497 Boot Partition: Not Supported 00:29:41.497 Memory Page Size Minimum: 4096 bytes 00:29:41.497 Memory Page Size Maximum: 4096 bytes 00:29:41.497 Persistent Memory Region: Not Supported 00:29:41.497 Optional Asynchronous Events Supported 00:29:41.497 Namespace Attribute Notices: Not Supported 00:29:41.497 Firmware Activation Notices: Not Supported 00:29:41.497 ANA Change Notices: Not Supported 00:29:41.497 PLE Aggregate Log Change Notices: Not Supported 00:29:41.497 LBA Status Info Alert Notices: Not Supported 00:29:41.497 EGE Aggregate Log Change Notices: Not Supported 00:29:41.497 Normal NVM Subsystem Shutdown event: Not Supported 00:29:41.497 Zone Descriptor Change Notices: Not Supported 00:29:41.497 Discovery Log Change Notices: Supported 00:29:41.497 Controller Attributes 00:29:41.497 128-bit Host Identifier: Not Supported 00:29:41.497 Non-Operational Permissive Mode: Not Supported 00:29:41.497 NVM Sets: Not Supported 00:29:41.497 Read Recovery Levels: Not Supported 00:29:41.497 Endurance Groups: Not Supported 00:29:41.497 Predictable Latency Mode: Not Supported 00:29:41.497 Traffic Based Keep ALive: Not Supported 00:29:41.497 Namespace Granularity: Not Supported 00:29:41.497 SQ Associations: Not Supported 00:29:41.497 UUID List: Not Supported 00:29:41.497 Multi-Domain Subsystem: Not Supported 00:29:41.497 Fixed Capacity Management: Not Supported 00:29:41.497 Variable Capacity Management: Not Supported 00:29:41.497 Delete Endurance Group: Not Supported 00:29:41.497 Delete NVM Set: Not Supported 00:29:41.497 Extended LBA Formats Supported: Not Supported 00:29:41.497 Flexible 
Data Placement Supported: Not Supported 00:29:41.497 00:29:41.497 Controller Memory Buffer Support 00:29:41.497 ================================ 00:29:41.497 Supported: No 00:29:41.497 00:29:41.497 Persistent Memory Region Support 00:29:41.497 ================================ 00:29:41.497 Supported: No 00:29:41.497 00:29:41.497 Admin Command Set Attributes 00:29:41.497 ============================ 00:29:41.497 Security Send/Receive: Not Supported 00:29:41.497 Format NVM: Not Supported 00:29:41.497 Firmware Activate/Download: Not Supported 00:29:41.497 Namespace Management: Not Supported 00:29:41.497 Device Self-Test: Not Supported 00:29:41.497 Directives: Not Supported 00:29:41.498 NVMe-MI: Not Supported 00:29:41.498 Virtualization Management: Not Supported 00:29:41.498 Doorbell Buffer Config: Not Supported 00:29:41.498 Get LBA Status Capability: Not Supported 00:29:41.498 Command & Feature Lockdown Capability: Not Supported 00:29:41.498 Abort Command Limit: 1 00:29:41.498 Async Event Request Limit: 1 00:29:41.498 Number of Firmware Slots: N/A 00:29:41.498 Firmware Slot 1 Read-Only: N/A 00:29:41.498 Firmware Activation Without Reset: N/A 00:29:41.498 Multiple Update Detection Support: N/A 00:29:41.498 Firmware Update Granularity: No Information Provided 00:29:41.498 Per-Namespace SMART Log: No 00:29:41.498 Asymmetric Namespace Access Log Page: Not Supported 00:29:41.498 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:41.498 Command Effects Log Page: Not Supported 00:29:41.498 Get Log Page Extended Data: Supported 00:29:41.498 Telemetry Log Pages: Not Supported 00:29:41.498 Persistent Event Log Pages: Not Supported 00:29:41.498 Supported Log Pages Log Page: May Support 00:29:41.498 Commands Supported & Effects Log Page: Not Supported 00:29:41.498 Feature Identifiers & Effects Log Page:May Support 00:29:41.498 NVMe-MI Commands & Effects Log Page: May Support 00:29:41.498 Data Area 4 for Telemetry Log: Not Supported 00:29:41.498 Error Log Page Entries 
Supported: 1 00:29:41.498 Keep Alive: Not Supported 00:29:41.498 00:29:41.498 NVM Command Set Attributes 00:29:41.498 ========================== 00:29:41.498 Submission Queue Entry Size 00:29:41.498 Max: 1 00:29:41.498 Min: 1 00:29:41.498 Completion Queue Entry Size 00:29:41.498 Max: 1 00:29:41.498 Min: 1 00:29:41.498 Number of Namespaces: 0 00:29:41.498 Compare Command: Not Supported 00:29:41.498 Write Uncorrectable Command: Not Supported 00:29:41.498 Dataset Management Command: Not Supported 00:29:41.498 Write Zeroes Command: Not Supported 00:29:41.498 Set Features Save Field: Not Supported 00:29:41.498 Reservations: Not Supported 00:29:41.498 Timestamp: Not Supported 00:29:41.498 Copy: Not Supported 00:29:41.498 Volatile Write Cache: Not Present 00:29:41.498 Atomic Write Unit (Normal): 1 00:29:41.498 Atomic Write Unit (PFail): 1 00:29:41.498 Atomic Compare & Write Unit: 1 00:29:41.498 Fused Compare & Write: Not Supported 00:29:41.498 Scatter-Gather List 00:29:41.498 SGL Command Set: Supported 00:29:41.498 SGL Keyed: Not Supported 00:29:41.498 SGL Bit Bucket Descriptor: Not Supported 00:29:41.498 SGL Metadata Pointer: Not Supported 00:29:41.498 Oversized SGL: Not Supported 00:29:41.498 SGL Metadata Address: Not Supported 00:29:41.498 SGL Offset: Supported 00:29:41.498 Transport SGL Data Block: Not Supported 00:29:41.498 Replay Protected Memory Block: Not Supported 00:29:41.498 00:29:41.498 Firmware Slot Information 00:29:41.498 ========================= 00:29:41.498 Active slot: 0 00:29:41.498 00:29:41.498 00:29:41.498 Error Log 00:29:41.498 ========= 00:29:41.498 00:29:41.498 Active Namespaces 00:29:41.498 ================= 00:29:41.498 Discovery Log Page 00:29:41.498 ================== 00:29:41.498 Generation Counter: 2 00:29:41.498 Number of Records: 2 00:29:41.498 Record Format: 0 00:29:41.498 00:29:41.498 Discovery Log Entry 0 00:29:41.498 ---------------------- 00:29:41.498 Transport Type: 3 (TCP) 00:29:41.498 Address Family: 1 (IPv4) 00:29:41.498 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:29:41.498 Entry Flags: 00:29:41.498 Duplicate Returned Information: 0 00:29:41.498 Explicit Persistent Connection Support for Discovery: 0 00:29:41.498 Transport Requirements: 00:29:41.498 Secure Channel: Not Specified 00:29:41.498 Port ID: 1 (0x0001) 00:29:41.498 Controller ID: 65535 (0xffff) 00:29:41.498 Admin Max SQ Size: 32 00:29:41.498 Transport Service Identifier: 4420 00:29:41.498 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:41.498 Transport Address: 10.0.0.1 00:29:41.498 Discovery Log Entry 1 00:29:41.498 ---------------------- 00:29:41.498 Transport Type: 3 (TCP) 00:29:41.498 Address Family: 1 (IPv4) 00:29:41.498 Subsystem Type: 2 (NVM Subsystem) 00:29:41.498 Entry Flags: 00:29:41.498 Duplicate Returned Information: 0 00:29:41.498 Explicit Persistent Connection Support for Discovery: 0 00:29:41.498 Transport Requirements: 00:29:41.498 Secure Channel: Not Specified 00:29:41.498 Port ID: 1 (0x0001) 00:29:41.498 Controller ID: 65535 (0xffff) 00:29:41.498 Admin Max SQ Size: 32 00:29:41.498 Transport Service Identifier: 4420 00:29:41.498 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:29:41.498 Transport Address: 10.0.0.1 00:29:41.498 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:41.758 get_feature(0x01) failed 00:29:41.758 get_feature(0x02) failed 00:29:41.758 get_feature(0x04) failed 00:29:41.758 ===================================================== 00:29:41.758 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:41.758 ===================================================== 00:29:41.758 Controller Capabilities/Features 00:29:41.758 ================================ 00:29:41.758 Vendor ID: 0000 00:29:41.758 Subsystem Vendor ID: 
0000 00:29:41.758 Serial Number: 84049a0103273d090fd3 00:29:41.758 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:29:41.758 Firmware Version: 6.8.9-20 00:29:41.758 Recommended Arb Burst: 6 00:29:41.758 IEEE OUI Identifier: 00 00 00 00:29:41.758 Multi-path I/O 00:29:41.758 May have multiple subsystem ports: Yes 00:29:41.758 May have multiple controllers: Yes 00:29:41.758 Associated with SR-IOV VF: No 00:29:41.758 Max Data Transfer Size: Unlimited 00:29:41.758 Max Number of Namespaces: 1024 00:29:41.758 Max Number of I/O Queues: 128 00:29:41.758 NVMe Specification Version (VS): 1.3 00:29:41.758 NVMe Specification Version (Identify): 1.3 00:29:41.758 Maximum Queue Entries: 1024 00:29:41.758 Contiguous Queues Required: No 00:29:41.758 Arbitration Mechanisms Supported 00:29:41.758 Weighted Round Robin: Not Supported 00:29:41.758 Vendor Specific: Not Supported 00:29:41.758 Reset Timeout: 7500 ms 00:29:41.758 Doorbell Stride: 4 bytes 00:29:41.759 NVM Subsystem Reset: Not Supported 00:29:41.759 Command Sets Supported 00:29:41.759 NVM Command Set: Supported 00:29:41.759 Boot Partition: Not Supported 00:29:41.759 Memory Page Size Minimum: 4096 bytes 00:29:41.759 Memory Page Size Maximum: 4096 bytes 00:29:41.759 Persistent Memory Region: Not Supported 00:29:41.759 Optional Asynchronous Events Supported 00:29:41.759 Namespace Attribute Notices: Supported 00:29:41.759 Firmware Activation Notices: Not Supported 00:29:41.759 ANA Change Notices: Supported 00:29:41.759 PLE Aggregate Log Change Notices: Not Supported 00:29:41.759 LBA Status Info Alert Notices: Not Supported 00:29:41.759 EGE Aggregate Log Change Notices: Not Supported 00:29:41.759 Normal NVM Subsystem Shutdown event: Not Supported 00:29:41.759 Zone Descriptor Change Notices: Not Supported 00:29:41.759 Discovery Log Change Notices: Not Supported 00:29:41.759 Controller Attributes 00:29:41.759 128-bit Host Identifier: Supported 00:29:41.759 Non-Operational Permissive Mode: Not Supported 00:29:41.759 NVM Sets: Not 
Supported 00:29:41.759 Read Recovery Levels: Not Supported 00:29:41.759 Endurance Groups: Not Supported 00:29:41.759 Predictable Latency Mode: Not Supported 00:29:41.759 Traffic Based Keep ALive: Supported 00:29:41.759 Namespace Granularity: Not Supported 00:29:41.759 SQ Associations: Not Supported 00:29:41.759 UUID List: Not Supported 00:29:41.759 Multi-Domain Subsystem: Not Supported 00:29:41.759 Fixed Capacity Management: Not Supported 00:29:41.759 Variable Capacity Management: Not Supported 00:29:41.759 Delete Endurance Group: Not Supported 00:29:41.759 Delete NVM Set: Not Supported 00:29:41.759 Extended LBA Formats Supported: Not Supported 00:29:41.759 Flexible Data Placement Supported: Not Supported 00:29:41.759 00:29:41.759 Controller Memory Buffer Support 00:29:41.759 ================================ 00:29:41.759 Supported: No 00:29:41.759 00:29:41.759 Persistent Memory Region Support 00:29:41.759 ================================ 00:29:41.759 Supported: No 00:29:41.759 00:29:41.759 Admin Command Set Attributes 00:29:41.759 ============================ 00:29:41.759 Security Send/Receive: Not Supported 00:29:41.759 Format NVM: Not Supported 00:29:41.759 Firmware Activate/Download: Not Supported 00:29:41.759 Namespace Management: Not Supported 00:29:41.759 Device Self-Test: Not Supported 00:29:41.759 Directives: Not Supported 00:29:41.759 NVMe-MI: Not Supported 00:29:41.759 Virtualization Management: Not Supported 00:29:41.759 Doorbell Buffer Config: Not Supported 00:29:41.759 Get LBA Status Capability: Not Supported 00:29:41.759 Command & Feature Lockdown Capability: Not Supported 00:29:41.759 Abort Command Limit: 4 00:29:41.759 Async Event Request Limit: 4 00:29:41.759 Number of Firmware Slots: N/A 00:29:41.759 Firmware Slot 1 Read-Only: N/A 00:29:41.759 Firmware Activation Without Reset: N/A 00:29:41.759 Multiple Update Detection Support: N/A 00:29:41.759 Firmware Update Granularity: No Information Provided 00:29:41.759 Per-Namespace SMART Log: Yes 
00:29:41.759 Asymmetric Namespace Access Log Page: Supported 00:29:41.759 ANA Transition Time : 10 sec 00:29:41.759 00:29:41.759 Asymmetric Namespace Access Capabilities 00:29:41.759 ANA Optimized State : Supported 00:29:41.759 ANA Non-Optimized State : Supported 00:29:41.759 ANA Inaccessible State : Supported 00:29:41.759 ANA Persistent Loss State : Supported 00:29:41.759 ANA Change State : Supported 00:29:41.759 ANAGRPID is not changed : No 00:29:41.759 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:29:41.759 00:29:41.759 ANA Group Identifier Maximum : 128 00:29:41.759 Number of ANA Group Identifiers : 128 00:29:41.759 Max Number of Allowed Namespaces : 1024 00:29:41.759 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:29:41.759 Command Effects Log Page: Supported 00:29:41.759 Get Log Page Extended Data: Supported 00:29:41.759 Telemetry Log Pages: Not Supported 00:29:41.759 Persistent Event Log Pages: Not Supported 00:29:41.759 Supported Log Pages Log Page: May Support 00:29:41.759 Commands Supported & Effects Log Page: Not Supported 00:29:41.759 Feature Identifiers & Effects Log Page:May Support 00:29:41.759 NVMe-MI Commands & Effects Log Page: May Support 00:29:41.759 Data Area 4 for Telemetry Log: Not Supported 00:29:41.759 Error Log Page Entries Supported: 128 00:29:41.759 Keep Alive: Supported 00:29:41.759 Keep Alive Granularity: 1000 ms 00:29:41.759 00:29:41.759 NVM Command Set Attributes 00:29:41.759 ========================== 00:29:41.759 Submission Queue Entry Size 00:29:41.759 Max: 64 00:29:41.759 Min: 64 00:29:41.759 Completion Queue Entry Size 00:29:41.759 Max: 16 00:29:41.759 Min: 16 00:29:41.759 Number of Namespaces: 1024 00:29:41.759 Compare Command: Not Supported 00:29:41.759 Write Uncorrectable Command: Not Supported 00:29:41.759 Dataset Management Command: Supported 00:29:41.759 Write Zeroes Command: Supported 00:29:41.759 Set Features Save Field: Not Supported 00:29:41.759 Reservations: Not Supported 00:29:41.759 Timestamp: Not Supported 
00:29:41.759 Copy: Not Supported 00:29:41.759 Volatile Write Cache: Present 00:29:41.759 Atomic Write Unit (Normal): 1 00:29:41.759 Atomic Write Unit (PFail): 1 00:29:41.759 Atomic Compare & Write Unit: 1 00:29:41.759 Fused Compare & Write: Not Supported 00:29:41.759 Scatter-Gather List 00:29:41.759 SGL Command Set: Supported 00:29:41.759 SGL Keyed: Not Supported 00:29:41.759 SGL Bit Bucket Descriptor: Not Supported 00:29:41.759 SGL Metadata Pointer: Not Supported 00:29:41.759 Oversized SGL: Not Supported 00:29:41.759 SGL Metadata Address: Not Supported 00:29:41.759 SGL Offset: Supported 00:29:41.759 Transport SGL Data Block: Not Supported 00:29:41.759 Replay Protected Memory Block: Not Supported 00:29:41.759 00:29:41.759 Firmware Slot Information 00:29:41.759 ========================= 00:29:41.759 Active slot: 0 00:29:41.759 00:29:41.759 Asymmetric Namespace Access 00:29:41.759 =========================== 00:29:41.759 Change Count : 0 00:29:41.759 Number of ANA Group Descriptors : 1 00:29:41.759 ANA Group Descriptor : 0 00:29:41.759 ANA Group ID : 1 00:29:41.759 Number of NSID Values : 1 00:29:41.759 Change Count : 0 00:29:41.759 ANA State : 1 00:29:41.759 Namespace Identifier : 1 00:29:41.759 00:29:41.759 Commands Supported and Effects 00:29:41.759 ============================== 00:29:41.759 Admin Commands 00:29:41.759 -------------- 00:29:41.759 Get Log Page (02h): Supported 00:29:41.759 Identify (06h): Supported 00:29:41.759 Abort (08h): Supported 00:29:41.759 Set Features (09h): Supported 00:29:41.759 Get Features (0Ah): Supported 00:29:41.759 Asynchronous Event Request (0Ch): Supported 00:29:41.759 Keep Alive (18h): Supported 00:29:41.759 I/O Commands 00:29:41.759 ------------ 00:29:41.759 Flush (00h): Supported 00:29:41.759 Write (01h): Supported LBA-Change 00:29:41.759 Read (02h): Supported 00:29:41.759 Write Zeroes (08h): Supported LBA-Change 00:29:41.759 Dataset Management (09h): Supported 00:29:41.759 00:29:41.759 Error Log 00:29:41.759 ========= 
00:29:41.759 Entry: 0 00:29:41.759 Error Count: 0x3 00:29:41.759 Submission Queue Id: 0x0 00:29:41.759 Command Id: 0x5 00:29:41.759 Phase Bit: 0 00:29:41.759 Status Code: 0x2 00:29:41.759 Status Code Type: 0x0 00:29:41.759 Do Not Retry: 1 00:29:41.759 Error Location: 0x28 00:29:41.759 LBA: 0x0 00:29:41.759 Namespace: 0x0 00:29:41.759 Vendor Log Page: 0x0 00:29:41.759 ----------- 00:29:41.759 Entry: 1 00:29:41.759 Error Count: 0x2 00:29:41.759 Submission Queue Id: 0x0 00:29:41.759 Command Id: 0x5 00:29:41.759 Phase Bit: 0 00:29:41.759 Status Code: 0x2 00:29:41.759 Status Code Type: 0x0 00:29:41.759 Do Not Retry: 1 00:29:41.759 Error Location: 0x28 00:29:41.759 LBA: 0x0 00:29:41.759 Namespace: 0x0 00:29:41.759 Vendor Log Page: 0x0 00:29:41.759 ----------- 00:29:41.759 Entry: 2 00:29:41.759 Error Count: 0x1 00:29:41.759 Submission Queue Id: 0x0 00:29:41.759 Command Id: 0x4 00:29:41.759 Phase Bit: 0 00:29:41.759 Status Code: 0x2 00:29:41.759 Status Code Type: 0x0 00:29:41.759 Do Not Retry: 1 00:29:41.759 Error Location: 0x28 00:29:41.759 LBA: 0x0 00:29:41.759 Namespace: 0x0 00:29:41.759 Vendor Log Page: 0x0 00:29:41.759 00:29:41.759 Number of Queues 00:29:41.759 ================ 00:29:41.759 Number of I/O Submission Queues: 128 00:29:41.759 Number of I/O Completion Queues: 128 00:29:41.759 00:29:41.759 ZNS Specific Controller Data 00:29:41.759 ============================ 00:29:41.759 Zone Append Size Limit: 0 00:29:41.759 00:29:41.759 00:29:41.759 Active Namespaces 00:29:41.759 ================= 00:29:41.759 get_feature(0x05) failed 00:29:41.760 Namespace ID:1 00:29:41.760 Command Set Identifier: NVM (00h) 00:29:41.760 Deallocate: Supported 00:29:41.760 Deallocated/Unwritten Error: Not Supported 00:29:41.760 Deallocated Read Value: Unknown 00:29:41.760 Deallocate in Write Zeroes: Not Supported 00:29:41.760 Deallocated Guard Field: 0xFFFF 00:29:41.760 Flush: Supported 00:29:41.760 Reservation: Not Supported 00:29:41.760 Namespace Sharing Capabilities: Multiple 
Controllers 00:29:41.760 Size (in LBAs): 3750748848 (1788GiB) 00:29:41.760 Capacity (in LBAs): 3750748848 (1788GiB) 00:29:41.760 Utilization (in LBAs): 3750748848 (1788GiB) 00:29:41.760 UUID: 7e412817-9cce-4613-84e4-2cf2fb8bcf76 00:29:41.760 Thin Provisioning: Not Supported 00:29:41.760 Per-NS Atomic Units: Yes 00:29:41.760 Atomic Write Unit (Normal): 8 00:29:41.760 Atomic Write Unit (PFail): 8 00:29:41.760 Preferred Write Granularity: 8 00:29:41.760 Atomic Compare & Write Unit: 8 00:29:41.760 Atomic Boundary Size (Normal): 0 00:29:41.760 Atomic Boundary Size (PFail): 0 00:29:41.760 Atomic Boundary Offset: 0 00:29:41.760 NGUID/EUI64 Never Reused: No 00:29:41.760 ANA group ID: 1 00:29:41.760 Namespace Write Protected: No 00:29:41.760 Number of LBA Formats: 1 00:29:41.760 Current LBA Format: LBA Format #00 00:29:41.760 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:41.760 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:41.760 rmmod nvme_tcp 00:29:41.760 rmmod nvme_fabrics 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:29:41.760 07:39:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.760 07:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.670 07:39:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:43.670 07:39:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:29:43.670 07:39:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:43.670 07:39:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:29:43.670 07:39:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:43.670 07:39:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:43.930 07:39:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:43.930 07:39:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:43.930 07:39:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:43.930 07:39:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:43.930 07:39:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:48.128 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:48.128 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:48.128 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:48.128 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:48.128 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:48.128 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:48.128 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:48.128 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:48.128 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:48.128 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:48.128 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:48.128 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:48.128 0000:00:01.2 (8086 0b00): ioatdma 
-> vfio-pci 00:29:48.128 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:48.128 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:48.128 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:48.128 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:48.128 00:29:48.128 real 0m20.626s 00:29:48.128 user 0m5.586s 00:29:48.128 sys 0m12.095s 00:29:48.128 07:39:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:48.128 07:39:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:48.128 ************************************ 00:29:48.128 END TEST nvmf_identify_kernel_target 00:29:48.128 ************************************ 00:29:48.128 07:39:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:48.128 07:39:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:48.128 07:39:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:48.128 07:39:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.128 ************************************ 00:29:48.128 START TEST nvmf_auth_host 00:29:48.128 ************************************ 00:29:48.128 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:48.128 * Looking for test storage... 
00:29:48.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:48.129 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:48.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.390 --rc genhtml_branch_coverage=1 00:29:48.390 --rc genhtml_function_coverage=1 00:29:48.390 --rc genhtml_legend=1 00:29:48.390 --rc geninfo_all_blocks=1 00:29:48.390 --rc geninfo_unexecuted_blocks=1 00:29:48.390 00:29:48.390 ' 00:29:48.390 07:39:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:48.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.390 --rc genhtml_branch_coverage=1 00:29:48.390 --rc genhtml_function_coverage=1 00:29:48.390 --rc genhtml_legend=1 00:29:48.390 --rc geninfo_all_blocks=1 00:29:48.390 --rc geninfo_unexecuted_blocks=1 00:29:48.390 00:29:48.390 ' 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:48.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.390 --rc genhtml_branch_coverage=1 00:29:48.390 --rc genhtml_function_coverage=1 00:29:48.390 --rc genhtml_legend=1 00:29:48.390 --rc geninfo_all_blocks=1 00:29:48.390 --rc geninfo_unexecuted_blocks=1 00:29:48.390 00:29:48.390 ' 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:48.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.390 --rc genhtml_branch_coverage=1 00:29:48.390 --rc genhtml_function_coverage=1 00:29:48.390 --rc genhtml_legend=1 00:29:48.390 --rc geninfo_all_blocks=1 00:29:48.390 --rc geninfo_unexecuted_blocks=1 00:29:48.390 00:29:48.390 ' 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.390 07:39:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.390 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:48.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:48.391 07:39:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:29:48.391 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:56.537 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:56.537 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:56.538 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:56.538 Found net devices under 0000:31:00.0: cvl_0_0 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:56.538 Found net devices under 0000:31:00.1: cvl_0_1 00:29:56.538 07:39:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:56.538 07:39:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:56.538 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.800 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.800 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.800 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:56.800 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:56.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:56.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:29:56.800 00:29:56.800 --- 10.0.0.2 ping statistics --- 00:29:56.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.800 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:29:56.800 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:56.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:29:56.800 00:29:56.800 --- 10.0.0.1 ping statistics --- 00:29:56.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.800 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:29:56.800 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.800 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:29:56.800 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:56.800 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.800 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:56.800 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:56.800 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.800 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:56.800 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:56.800 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:29:56.800 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:56.800 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:56.800 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.801 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2278844 00:29:56.801 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2278844 00:29:56.801 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:29:56.801 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2278844 ']' 00:29:56.801 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.801 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:56.801 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.801 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:56.801 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:57.742 07:39:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e9256510fd001dbf312ab55e5963f28e 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.cfB 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e9256510fd001dbf312ab55e5963f28e 0 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e9256510fd001dbf312ab55e5963f28e 0 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e9256510fd001dbf312ab55e5963f28e 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.cfB 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.cfB 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.cfB 
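The `gen_dhchap_key` trace above reads random bytes with `xxd -p -c0` from /dev/urandom, then pipes them through an inline `python -` formatter (`format_dhchap_key`) before writing the secret to a `mktemp` file. A minimal sketch of that formatting step, assuming the DHHC-1 secret layout used by nvme-cli and NVMe in-band auth (base64 of the key bytes followed by a little-endian CRC-32) and the digest map shown in the trace (null=0, sha256=1, sha384=2, sha512=3); the function names and exact layout here are an assumption, not SPDK's actual implementation:

```python
import base64
import binascii
import os

# Digest map as traced at nvmf/common.sh@752: null=0, sha256=1, sha384=2, sha512=3
DIGESTS = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}

def format_dhchap_key(key_hex: str, digest: str) -> str:
    """Wrap raw key bytes in a DHHC-1 secret string (assumed layout)."""
    key = bytes.fromhex(key_hex)
    # CRC-32 of the key, little-endian, appended inside the base64 payload
    crc = binascii.crc32(key).to_bytes(4, "little")
    payload = base64.b64encode(key + crc).decode()
    return f"DHHC-1:{DIGESTS[digest]:02x}:{payload}:"

def gen_dhchap_key(digest: str, length: int) -> str:
    """Mirror of the trace: `length` hex chars, i.e. length/2 random bytes."""
    key_hex = os.urandom(length // 2).hex()
    return format_dhchap_key(key_hex, digest)
```

For example, `gen_dhchap_key("null", 32)` corresponds to the `gen_dhchap_key null 32` call traced above for `keys[0]`.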
00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6074c1127cd05d41235f15e2b7d7719c781d49614dcf6497afaff7e5bd8503c1 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.jSE 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6074c1127cd05d41235f15e2b7d7719c781d49614dcf6497afaff7e5bd8503c1 3 00:29:57.742 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6074c1127cd05d41235f15e2b7d7719c781d49614dcf6497afaff7e5bd8503c1 3 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6074c1127cd05d41235f15e2b7d7719c781d49614dcf6497afaff7e5bd8503c1 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.jSE 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.jSE 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.jSE 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=850c03af4a50acab08f2ff30c526c9618f4438cf390ff88c 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.lvc 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 850c03af4a50acab08f2ff30c526c9618f4438cf390ff88c 0 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 850c03af4a50acab08f2ff30c526c9618f4438cf390ff88c 0 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=850c03af4a50acab08f2ff30c526c9618f4438cf390ff88c 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:57.743 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.lvc 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.lvc 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.lvc 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4c845f70b0a9683f8552ab2ecec16a952bf17eb3ca7c3090 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.EPA 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4c845f70b0a9683f8552ab2ecec16a952bf17eb3ca7c3090 2 00:29:58.004 07:39:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4c845f70b0a9683f8552ab2ecec16a952bf17eb3ca7c3090 2 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4c845f70b0a9683f8552ab2ecec16a952bf17eb3ca7c3090 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.EPA 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.EPA 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.EPA 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cc6fe4de2ef03194e1d7a33fd53ed720 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.IWh 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cc6fe4de2ef03194e1d7a33fd53ed720 1 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cc6fe4de2ef03194e1d7a33fd53ed720 1 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cc6fe4de2ef03194e1d7a33fd53ed720 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:58.004 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.IWh 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.IWh 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.IWh 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9b98f6b132f49d2b303542c0d2e0cb81 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.gfh 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9b98f6b132f49d2b303542c0d2e0cb81 1 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9b98f6b132f49d2b303542c0d2e0cb81 1 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9b98f6b132f49d2b303542c0d2e0cb81 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.gfh 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.gfh 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.gfh 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:58.004 07:39:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=771ad9a7fab5351d0b4088fbd8cda8d90bfe8331729afb0a 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.XD7 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 771ad9a7fab5351d0b4088fbd8cda8d90bfe8331729afb0a 2 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 771ad9a7fab5351d0b4088fbd8cda8d90bfe8331729afb0a 2 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=771ad9a7fab5351d0b4088fbd8cda8d90bfe8331729afb0a 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.XD7 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.XD7 00:29:58.004 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.XD7 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=56af6650ead5e4e6b2a66834546813f0 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.yVY 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 56af6650ead5e4e6b2a66834546813f0 0 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 56af6650ead5e4e6b2a66834546813f0 0 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=56af6650ead5e4e6b2a66834546813f0 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.yVY 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.yVY 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.yVY 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ca825e4c5481aab25e088a6c751e466d626bd04a8086d1d75d998b8233327b50 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.g45 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ca825e4c5481aab25e088a6c751e466d626bd04a8086d1d75d998b8233327b50 3 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ca825e4c5481aab25e088a6c751e466d626bd04a8086d1d75d998b8233327b50 3 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ca825e4c5481aab25e088a6c751e466d626bd04a8086d1d75d998b8233327b50 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:58.264 07:39:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.g45 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.g45 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.g45 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2278844 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2278844 ']' 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:58.264 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
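A DHHC-1 secret like the ones written to /tmp above can be sanity-checked by reversing the formatting; a hedged sketch, assuming the same layout as before (base64 payload containing the key bytes plus a trailing little-endian CRC-32):

```python
import base64
import binascii

def check_dhchap_secret(secret: str) -> bytes:
    """Decode a DHHC-1 secret and verify its trailing CRC-32 (assumed layout)."""
    prefix, _digest_id, payload, _ = secret.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    raw = base64.b64decode(payload)
    key, crc = raw[:-4], raw[-4:]
    if binascii.crc32(key).to_bytes(4, "little") != crc:
        raise ValueError("CRC mismatch")
    return key  # the raw key material, CRC stripped
```

A corrupted payload fails the CRC check instead of being silently accepted, which is presumably why the checksum is embedded in the secret format.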
00:29:58.265 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:58.265 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cfB 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.jSE ]] 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jSE 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.lvc 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.EPA ]] 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.EPA 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.IWh 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.gfh ]] 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gfh 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.XD7 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.yVY ]] 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.yVY 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.g45 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.525 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:58.526 07:39:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:58.526 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:02.727 Waiting for block devices as requested 00:30:02.727 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:02.727 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:02.727 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:02.727 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:02.727 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:02.727 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:02.727 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:02.987 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:02.987 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:30:03.247 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:03.247 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:03.247 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:03.508 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:03.508 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:03.508 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:03.769 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:03.769 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:04.714 No valid GPT data, bailing 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:04.714 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:30:04.714 00:30:04.714 Discovery Log Number of Records 2, Generation counter 2 00:30:04.714 =====Discovery Log Entry 0====== 00:30:04.714 trtype: tcp 00:30:04.714 adrfam: ipv4 00:30:04.714 subtype: current discovery subsystem 00:30:04.714 treq: not specified, sq flow control disable supported 00:30:04.714 portid: 1 00:30:04.714 trsvcid: 4420 00:30:04.714 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:04.714 traddr: 10.0.0.1 00:30:04.714 eflags: none 00:30:04.714 sectype: none 00:30:04.714 =====Discovery Log Entry 1====== 00:30:04.714 trtype: tcp 00:30:04.714 adrfam: ipv4 00:30:04.714 subtype: nvme subsystem 00:30:04.714 treq: not specified, sq flow control disable supported 00:30:04.714 portid: 1 00:30:04.714 trsvcid: 4420 00:30:04.715 subnqn: nqn.2024-02.io.spdk:cnode0 00:30:04.715 traddr: 10.0.0.1 00:30:04.715 eflags: none 00:30:04.715 sectype: none 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: ]] 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.715 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.975 nvme0n1 00:30:04.975 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.976 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.976 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.976 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.976 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.976 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.976 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.976 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.976 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:04.976 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: ]] 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.976 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.236 nvme0n1 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.236 07:39:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:05.236 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: ]] 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.237 
07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.237 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.497 nvme0n1 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: ]] 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.497 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:30:05.758 nvme0n1 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:05.758 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: ]] 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.759 nvme0n1 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.759 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:06.022 07:39:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.022 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.022 nvme0n1 00:30:06.022 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.022 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.022 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.022 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.022 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.022 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.283 
07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: ]] 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:06.283 
07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.283 07:39:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.283 nvme0n1 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.283 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.284 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.284 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.544 07:39:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: ]] 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:06.544 07:39:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.544 nvme0n1 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.544 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.806 07:39:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: ]] 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.806 nvme0n1 00:30:06.806 07:39:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.806 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:07.067 07:39:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: ]] 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:07.067 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.068 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.068 nvme0n1 00:30:07.068 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.068 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.068 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:07.068 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:07.068 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.327 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.327 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.328 07:39:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.328 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.589 nvme0n1 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: ]] 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.589 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.590 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:07.590 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.590 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.850 nvme0n1 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: ]] 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:07.850 
07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.850 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.851 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.851 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.851 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.851 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.851 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.851 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.851 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.851 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:07.851 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.851 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.111 nvme0n1 00:30:08.111 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.111 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:08.111 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:08.111 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.111 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.111 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.111 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.111 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.111 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.111 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:08.373 07:39:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: ]] 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.373 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.634 nvme0n1 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.634 07:39:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:08.634 
07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: ]] 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:08.634 07:39:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.634 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.895 nvme0n1 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.895 07:39:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:08.895 
07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.895 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.156 nvme0n1 00:30:09.156 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.156 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.156 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.156 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.156 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.156 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.416 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.416 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.416 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.416 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.416 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.416 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:09.416 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.416 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:30:09.416 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:09.416 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:09.416 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:09.416 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:09.416 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: ]] 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.417 07:39:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.417 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.678 nvme0n1 00:30:09.678 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: ]] 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:09.938 07:39:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.938 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.199 nvme0n1 00:30:10.199 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.199 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.199 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:10.199 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.199 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.199 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: ]] 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.460 07:39:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.460 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.720 nvme0n1 00:30:10.720 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.720 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.720 07:39:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:10.720 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.720 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:10.981 07:39:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: ]] 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:10.981 07:39:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.981 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.552 nvme0n1 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.552 07:39:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:11.552 07:39:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.552 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.812 nvme0n1 00:30:11.812 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.073 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:12.073 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:12.073 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.073 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.073 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.073 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.073 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:12.073 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.073 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: ]] 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:12.073 07:39:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.073 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.644 nvme0n1 00:30:12.644 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:12.905 07:39:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: ]] 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:12.905 07:39:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:12.905 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.905 07:39:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.844 nvme0n1 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: ]] 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.844 07:39:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.844 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.413 nvme0n1 00:30:14.413 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.413 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.413 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.413 07:39:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:14.414 07:39:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: ]] 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:14.414 07:39:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.414 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.354 nvme0n1 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.354 07:39:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:15.354 07:39:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.354 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.296 nvme0n1 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: ]] 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:16.296 07:40:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.296 nvme0n1 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:16.296 07:40:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:16.296 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: ]] 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:16.297 07:40:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:16.297 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.297 07:40:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.558 nvme0n1 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: ]] 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.558 07:40:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.558 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.819 nvme0n1 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: ]] 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.819 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.080 nvme0n1 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:17.080 
07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.080 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.340 nvme0n1 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: ]] 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.340 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.601 nvme0n1 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:17.601 
07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: ]] 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.601 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.862 nvme0n1 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 
00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: ]] 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:17.862 07:40:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:17.862 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:18.123 nvme0n1
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==:
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ:
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==:
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: ]]
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ:
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:30:18.123 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:18.124 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:18.124 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:18.124 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:18.124 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:18.124 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:18.124 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:18.124 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:18.124 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:18.124 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:18.124 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:18.124 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:18.124 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:18.124 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:18.124 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:30:18.124 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:18.124 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:18.385 nvme0n1
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=:
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=:
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:18.385 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:18.645 nvme0n1
00:30:18.645 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:18.645 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:18.645 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:18.645 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:18.645 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:18.645 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:18.645 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:18.645 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:18.645 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:18.645 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:18.645 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:18.645 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:18.645 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:18.645 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:30:18.645 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:18.645 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:18.645 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:18.645 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:18.645 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7:
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=:
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7:
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: ]]
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=:
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:18.646 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:18.906 nvme0n1
00:30:18.906 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:18.906 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:18.906 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:18.906 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:18.906 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:18.906 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==:
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==:
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==:
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: ]]
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==:
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.166 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:19.428 nvme0n1
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM:
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF:
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM:
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: ]]
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF:
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.428 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:19.690 nvme0n1
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==:
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ:
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==:
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: ]]
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ:
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.690 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:19.951 nvme0n1
00:30:19.951 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.951 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:19.951 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:19.951 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.951 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:19.951 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:20.239 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=:
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=:
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:20.240 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:20.501 nvme0n1
00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:20.501 07:40:04
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: ]] 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.501 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.074 nvme0n1 
00:30:21.074 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.074 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:21.074 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:21.074 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.074 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.074 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.074 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:21.074 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:21.074 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.074 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.074 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.074 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:21.074 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:30:21.074 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:21.074 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:21.075 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:21.075 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:21.075 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:21.075 07:40:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:21.075 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:21.075 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:21.075 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:21.075 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: ]] 00:30:21.075 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:21.075 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:30:21.075 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:21.075 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:21.075 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:21.075 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:21.075 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:21.075 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:21.075 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.075 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.075 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.075 
07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:21.075 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:21.075 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:21.075 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:21.075 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:21.075 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:21.075 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:21.075 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:21.075 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:21.075 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:21.075 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:21.075 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:21.075 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.075 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.336 nvme0n1 00:30:21.336 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:21.598 07:40:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:21.598 07:40:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: ]] 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:21.598 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:21.599 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:21.599 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:21.599 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:21.599 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:21.599 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:21.599 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.599 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.172 nvme0n1 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: ]] 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:22.172 07:40:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:22.172 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:22.173 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:22.173 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:22.173 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:22.173 07:40:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:22.173 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:22.173 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:22.173 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:22.173 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.173 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.434 nvme0n1 00:30:22.434 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.434 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:22.434 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:22.434 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.434 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.434 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.434 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:22.434 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:22.434 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.434 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.694 07:40:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:22.694 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:22.695 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:22.695 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:22.695 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:22.695 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:22.695 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:22.695 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:22.695 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:22.695 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:22.695 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:22.695 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.963 nvme0n1 00:30:22.963 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.963 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:22.963 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.963 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:22.963 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.963 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:23.270 07:40:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: ]] 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.270 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.893 nvme0n1 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: ]] 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.893 07:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.836 nvme0n1 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: ]] 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.836 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.837 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.837 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:24.837 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:24.837 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:24.837 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:24.837 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:24.837 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:24.837 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:24.837 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:24.837 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:24.837 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:24.837 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:24.837 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:24.837 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.837 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.780 nvme0n1 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: ]] 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.780 07:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.352 nvme0n1 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:26.352 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:26.353 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:26.353 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:26.353 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:26.353 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:30:26.353 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:26.353 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:26.353 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:26.353 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:26.353 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:26.353 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:26.353 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.353 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.353 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.614 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:26.614 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:26.614 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:26.614 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:26.614 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:26.614 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:26.614 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:26.614 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:26.614 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:26.614 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:26.614 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:26.614 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:26.614 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.614 07:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:27.187 nvme0n1 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: ]] 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:27.187 07:40:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.187 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.448 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.448 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:27.448 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:27.448 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:27.448 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:27.448 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:27.448 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:27.448 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:27.448 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.449 nvme0n1 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: ]] 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.449 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.711 nvme0n1 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: ]] 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.711 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.973 nvme0n1 00:30:27.973 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.973 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:27.973 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:27.973 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.973 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.973 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.973 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.973 07:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: ]] 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.973 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.235 nvme0n1 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.235 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:28.236 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:28.236 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:28.236 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:28.236 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.236 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.236 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:28.236 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.236 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:28.236 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:28.236 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:28.236 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:28.236 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.236 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:28.496 nvme0n1 00:30:28.496 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.496 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.496 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:28.496 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.496 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.496 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.496 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.496 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:28.497 07:40:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: ]] 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.497 07:40:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.497 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.761 nvme0n1 00:30:28.761 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.761 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.761 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:28.761 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.761 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.761 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.761 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.761 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.761 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.761 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.761 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.761 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:28.761 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:30:28.761 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:28.761 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:28.761 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:28.761 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:28.762 07:40:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: ]] 00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
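(Editor's note, not part of the captured log.) The `DHHC-1:<hash>:<base64>:` strings echoed throughout this run are DH-HMAC-CHAP secrets in the NVMe-oF on-wire representation: a version tag, a hash identifier (`00` = none/implicit, `01` = SHA-256, `02` = SHA-384, `03` = SHA-512), and a base64 payload whose trailing 4 bytes are a CRC-32 check value over the secret. A minimal parsing sketch, using one of the `key1` secrets from the log above; the variable names and the choice to report only the decoded secret length (rather than verify the CRC) are this sketch's own:

```shell
#!/usr/bin/env bash
# Parse a DH-HMAC-CHAP secret of the form DHHC-1:<hash>:<base64>:
# Key value copied from the log above (host/auth.sh@50).
key='DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==:'

# Split on ':'; the trailing ':' yields an empty fourth field we discard.
IFS=':' read -r magic hash b64 _ <<< "$key"

# Decode the payload; the last 4 bytes are a CRC-32 of the secret,
# so the secret itself is 4 bytes shorter than the decoded blob.
raw_len=$(printf '%s' "$b64" | base64 -d | wc -c)
secret_len=$((raw_len - 4))

echo "$magic hash=$hash secret_bytes=$secret_len"
```

For this particular key the payload decodes to 52 bytes, i.e. a 48-byte secret plus the 4-byte check value, which is consistent with the `:00:` (no transformation) hash identifier the test passes to `bdev_nvme_set_options`/`bdev_nvme_attach_controller` via `--dhchap-key`.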
00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:28.762 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:28.763 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.763 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.763 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:28.763 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.763 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:28.763 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:28.763 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:28.763 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:28.763 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.763 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.029 nvme0n1 00:30:29.029 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.029 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:29.029 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:29.029 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.029 07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.029 
07:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.029 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.029 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:29.029 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.029 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.029 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.029 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:29.029 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:30:29.029 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:29.029 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:29.029 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:29.029 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:29.029 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: ]] 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:29.030 07:40:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.030 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.291 nvme0n1 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.291 07:40:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: ]] 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:29.291 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:29.292 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:29.292 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:29.292 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.292 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.292 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.292 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:29.292 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:29.292 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:29.292 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:29.292 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:29.292 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:29.292 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:29.292 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:29.292 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:29.292 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:29.292 07:40:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:29.292 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:29.292 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.292 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.553 nvme0n1 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:30:29.553 07:40:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:29.553 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:29.554 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:29.554 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:29.554 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:29.554 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:29.554 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.554 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.815 nvme0n1 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.815 
07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: ]] 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.815 
07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:29.815 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:29.816 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:29.816 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:29.816 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:29.816 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.816 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.078 nvme0n1 00:30:30.078 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.078 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:30.078 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:30.078 07:40:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.078 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.078 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.078 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.078 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:30.078 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.078 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: ]] 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.340 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.602 nvme0n1 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: ]] 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.602 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.863 nvme0n1 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: ]] 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:30.864 07:40:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.864 07:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.125 nvme0n1 00:30:31.125 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.125 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:31.125 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:31.125 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.125 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.125 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.385 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:31.385 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:31.385 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.385 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:31.386 07:40:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.386 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.647 nvme0n1 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:31.647 
07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: ]] 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.647 07:40:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.647 07:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.221 nvme0n1 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:32.221 07:40:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: ]] 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.221 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.793 nvme0n1 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: ]] 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:32.793 
07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:32.793 07:40:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:32.793 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:32.794 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:32.794 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.794 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.364 nvme0n1 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.364 07:40:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:33.364 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: ]] 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.365 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.936 nvme0n1 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:33.936 07:40:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.936 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:33.937 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:33.937 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:33.937 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:33.937 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:33.937 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:33.937 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:33.937 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:33.937 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:33.937 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:33.937 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:33.937 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:33.937 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.937 07:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.509 nvme0n1 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:34.509 
07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNTY1MTBmZDAwMWRiZjMxMmFiNTVlNTk2M2YyOGVBzHF7: 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: ]] 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjA3NGMxMTI3Y2QwNWQ0MTIzNWYxNWUyYjdkNzcxOWM3ODFkNDk2MTRkY2Y2NDk3YWZhZmY3ZTViZDg1MDNjMTzZjEg=: 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:34.509 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:34.510 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:34.510 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.510 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.510 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.510 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:34.510 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:34.510 07:40:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:34.510 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:34.510 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:34.510 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:34.510 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:34.510 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:34.510 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:34.510 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:34.510 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:34.510 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:34.510 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.510 07:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.081 nvme0n1 00:30:35.081 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.081 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:35.081 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:35.081 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.081 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.081 07:40:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:35.341 07:40:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: ]] 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:35.341 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:35.342 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:35.342 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:35.342 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:35.342 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.342 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.342 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.342 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:35.342 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:35.342 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:35.342 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:35.342 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:35.342 07:40:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:35.342 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:35.342 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:35.342 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:35.342 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:35.342 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:35.342 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:35.342 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.342 07:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.914 nvme0n1 00:30:35.914 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.914 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:35.914 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:35.914 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.914 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.914 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:36.175 07:40:20 
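The `DHHC-1:` strings echoed into the target above follow the NVMe-oF DH-HCHAP secret representation, `DHHC-1:<hash>:<base64(secret || crc32)>:`, where the hash field indicates the transformation applied to the secret (00 = none, 01/02/03 = SHA-256/384/512). A minimal sketch pulling apart one of the log's keys; the key string is copied verbatim from the trace, and treating the last four decoded bytes as a CRC-32 of the secret is an assumption based on that format:

```shell
#!/usr/bin/env bash
# Decode the keyid=4 DH-HCHAP secret used in the trace above.
key='DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=:'

hash_id=$(cut -d: -f2 <<< "$key")            # 00=none, 01=sha256, 02=sha384, 03=sha512
b64=$(cut -d: -f3 <<< "$key")                # base64 of secret || 4-byte CRC-32
decoded_len=$(base64 -d <<< "$b64" | wc -c)  # total decoded bytes
secret_len=$((decoded_len - 4))              # strip the trailing CRC-32

echo "hash=$hash_id secret_bytes=$secret_len"
```

For this key the decoded payload is 68 bytes, i.e. a 64-byte secret plus the 4-byte CRC, matching the `03` (SHA-512-transformed) hash field.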
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: ]] 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:30:36.175 07:40:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.175 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.748 nvme0n1 00:30:36.748 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.748 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:36.748 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:36.748 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.748 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.749 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzcxYWQ5YTdmYWI1MzUxZDBiNDA4OGZiZDhjZGE4ZDkwYmZlODMzMTcyOWFmYjBhuYyXww==: 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: ]] 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTZhZjY2NTBlYWQ1ZTRlNmIyYTY2ODM0NTQ2ODEzZjCTKrUJ: 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:37.009 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:37.010 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:37.010 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:37.010 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:37.010 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:37.010 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:37.010 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:37.010 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:37.010 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:37.010 07:40:20 
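The `get_main_ns_ip` trace that repeats before every attach picks the dial address from a per-transport candidate table and then dereferences the chosen variable name. A sketch of that selection logic, mirroring the variable names in the trace; the `NVMF_FIRST_TARGET_IP` value here is illustrative (only the tcp path, yielding the 10.0.0.1 printed in the log, is exercised):

```shell
#!/usr/bin/env bash
# Mirror of the get_main_ns_ip selection traced above.
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2   # assumed value; only consulted for rdma

declare -A ip_candidates=(
  [rdma]=NVMF_FIRST_TARGET_IP
  [tcp]=NVMF_INITIATOR_IP
)
var=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP for tcp
ip=${!var}                              # indirect expansion -> 10.0.0.1

echo "$ip"
```

The indirection (`${!var}`) is what lets the table store variable *names* rather than values, so the same lookup works whichever transport the test run configured.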
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.010 07:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.581 nvme0n1 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2E4MjVlNGM1NDgxYWFiMjVlMDg4YTZjNzUxZTQ2NmQ2MjZiZDA0YTgwODZkMWQ3NWQ5OThiODIzMzMyN2I1MJWLZxg=: 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:37.581 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:37.582 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:37.582 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.582 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.582 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.582 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:37.582 
07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:37.582 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:37.582 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:37.582 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:37.582 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:37.582 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:37.582 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:37.582 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:37.582 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:37.582 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:37.582 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:37.582 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.582 07:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.521 nvme0n1 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:38.521 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: ]] 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.522 request: 00:30:38.522 { 00:30:38.522 "name": "nvme0", 00:30:38.522 "trtype": "tcp", 00:30:38.522 "traddr": "10.0.0.1", 00:30:38.522 "adrfam": "ipv4", 00:30:38.522 "trsvcid": "4420", 00:30:38.522 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:38.522 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:38.522 "prchk_reftag": false, 00:30:38.522 "prchk_guard": false, 00:30:38.522 "hdgst": false, 00:30:38.522 "ddgst": false, 00:30:38.522 "allow_unrecognized_csi": false, 00:30:38.522 "method": "bdev_nvme_attach_controller", 00:30:38.522 "req_id": 1 00:30:38.522 } 00:30:38.522 Got JSON-RPC error 
response 00:30:38.522 response: 00:30:38.522 { 00:30:38.522 "code": -5, 00:30:38.522 "message": "Input/output error" 00:30:38.522 } 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.522 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.782 request: 
00:30:38.782 { 00:30:38.782 "name": "nvme0", 00:30:38.782 "trtype": "tcp", 00:30:38.782 "traddr": "10.0.0.1", 00:30:38.782 "adrfam": "ipv4", 00:30:38.782 "trsvcid": "4420", 00:30:38.782 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:38.782 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:38.782 "prchk_reftag": false, 00:30:38.782 "prchk_guard": false, 00:30:38.782 "hdgst": false, 00:30:38.782 "ddgst": false, 00:30:38.782 "dhchap_key": "key2", 00:30:38.782 "allow_unrecognized_csi": false, 00:30:38.782 "method": "bdev_nvme_attach_controller", 00:30:38.782 "req_id": 1 00:30:38.782 } 00:30:38.782 Got JSON-RPC error response 00:30:38.782 response: 00:30:38.782 { 00:30:38.782 "code": -5, 00:30:38.782 "message": "Input/output error" 00:30:38.782 } 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.782 07:40:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.782 request: 00:30:38.782 { 00:30:38.782 "name": "nvme0", 00:30:38.782 "trtype": "tcp", 00:30:38.782 "traddr": "10.0.0.1", 00:30:38.782 "adrfam": "ipv4", 00:30:38.782 "trsvcid": "4420", 00:30:38.782 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:38.782 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:38.782 "prchk_reftag": false, 00:30:38.782 "prchk_guard": false, 00:30:38.782 "hdgst": false, 00:30:38.782 "ddgst": false, 00:30:38.782 "dhchap_key": "key1", 00:30:38.782 "dhchap_ctrlr_key": "ckey2", 00:30:38.782 "allow_unrecognized_csi": false, 00:30:38.782 "method": "bdev_nvme_attach_controller", 00:30:38.782 "req_id": 1 00:30:38.782 } 00:30:38.782 Got JSON-RPC error response 00:30:38.782 response: 00:30:38.782 { 00:30:38.782 "code": -5, 00:30:38.782 "message": "Input/output error" 00:30:38.782 } 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:38.782 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:38.783 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:38.783 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:30:38.783 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:38.783 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:38.783 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:38.783 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:38.783 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:38.783 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:38.783 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:38.783 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:38.783 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:38.783 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:38.783 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:38.783 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.783 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.042 nvme0n1 00:30:39.042 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.042 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:39.042 07:40:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:39.042 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:39.042 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:39.042 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:39.042 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:39.042 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:39.042 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:39.042 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:39.042 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:39.042 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: ]] 00:30:39.042 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:39.042 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:39.042 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.042 07:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.042 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.042 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:30:39.042 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:30:39.042 
07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.042 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.042 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.042 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:39.042 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:39.042 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:39.042 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:39.042 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:39.042 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:39.042 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:39.042 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:39.042 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:39.042 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.042 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.301 request: 00:30:39.301 { 00:30:39.301 "name": "nvme0", 00:30:39.301 "dhchap_key": "key1", 00:30:39.301 "dhchap_ctrlr_key": "ckey2", 00:30:39.301 "method": "bdev_nvme_set_keys", 00:30:39.301 "req_id": 1 00:30:39.301 } 00:30:39.301 Got JSON-RPC error response 00:30:39.301 response: 
00:30:39.301 { 00:30:39.301 "code": -13, 00:30:39.301 "message": "Permission denied" 00:30:39.301 } 00:30:39.301 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:39.301 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:39.301 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:39.301 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:39.301 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:39.301 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:39.301 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:39.302 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.302 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.302 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.302 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:30:39.302 07:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:30:40.236 07:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:40.236 07:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:40.236 07:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.236 07:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.237 07:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.237 07:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:30:40.237 07:40:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:30:41.176 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:41.176 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:41.176 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.176 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.176 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwYzAzYWY0YTUwYWNhYjA4ZjJmZjMwYzUyNmM5NjE4ZjQ0MzhjZjM5MGZmODhjkk3TPw==: 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: ]] 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM4NDVmNzBiMGE5NjgzZjg1NTJhYjJlY2VjMTZhOTUyYmYxN2ViM2NhN2MzMDkwwsXf/g==: 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.436 nvme0n1 00:30:41.436 07:40:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2M2ZmU0ZGUyZWYwMzE5NGUxZDdhMzNmZDUzZWQ3MjCEeDtM: 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: ]] 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI5OGY2YjEzMmY0OWQyYjMwMzU0MmMwZDJlMGNiODHBqKrF: 00:30:41.436 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:41.437 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:41.437 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:41.437 07:40:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:41.437 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:41.437 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:41.437 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:41.437 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:41.437 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.437 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.437 request: 00:30:41.437 { 00:30:41.437 "name": "nvme0", 00:30:41.437 "dhchap_key": "key2", 00:30:41.437 "dhchap_ctrlr_key": "ckey1", 00:30:41.437 "method": "bdev_nvme_set_keys", 00:30:41.437 "req_id": 1 00:30:41.437 } 00:30:41.437 Got JSON-RPC error response 00:30:41.437 response: 00:30:41.437 { 00:30:41.437 "code": -13, 00:30:41.437 "message": "Permission denied" 00:30:41.437 } 00:30:41.437 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:41.437 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:41.437 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:41.437 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:41.437 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:41.696 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:30:41.696 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:30:41.696 07:40:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.696 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.696 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.696 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:30:41.696 07:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:30:42.638 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:30:42.638 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:42.639 rmmod nvme_tcp 
00:30:42.639 rmmod nvme_fabrics 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2278844 ']' 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2278844 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2278844 ']' 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2278844 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:42.639 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2278844 00:30:42.899 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:42.899 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:42.899 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2278844' 00:30:42.899 killing process with pid 2278844 00:30:42.899 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2278844 00:30:42.899 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2278844 00:30:42.899 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:42.899 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:42.899 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:42.899 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:30:42.899 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:30:42.899 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:42.899 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:30:42.899 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:42.899 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:42.899 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.899 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.899 07:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.444 07:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:45.444 07:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:45.444 07:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:45.444 07:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:30:45.444 07:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:30:45.444 07:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:30:45.444 07:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:45.444 07:40:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:45.444 07:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:45.444 07:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:45.444 07:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:30:45.444 07:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:30:45.444 07:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:49.651 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:49.651 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:49.651 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:49.651 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:49.651 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:49.651 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:49.651 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:49.651 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:49.651 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:49.651 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:49.651 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:49.651 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:49.651 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:49.651 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:49.651 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:49.651 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:49.651 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:30:49.651 07:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.cfB /tmp/spdk.key-null.lvc /tmp/spdk.key-sha256.IWh /tmp/spdk.key-sha384.XD7 
/tmp/spdk.key-sha512.g45 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:30:49.651 07:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:52.952 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:52.952 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:52.952 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:52.952 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:52.952 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:52.952 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:52.952 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:52.952 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:52.952 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:52.952 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:52.952 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:52.952 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:52.952 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:52.952 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:52.952 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:52.952 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:52.952 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:53.522 00:30:53.522 real 1m5.370s 00:30:53.522 user 0m58.035s 00:30:53.522 sys 0m17.471s 00:30:53.522 07:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:53.522 07:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.522 ************************************ 00:30:53.522 END TEST nvmf_auth_host 00:30:53.522 ************************************ 00:30:53.522 07:40:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:30:53.522 07:40:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:53.522 07:40:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:53.522 07:40:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:53.522 07:40:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.522 ************************************ 00:30:53.522 START TEST nvmf_digest 00:30:53.522 ************************************ 00:30:53.522 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:53.522 * Looking for test storage... 00:30:53.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:53.522 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:53.522 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:30:53.523 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:30:53.785 07:40:37 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:53.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.785 --rc genhtml_branch_coverage=1 00:30:53.785 --rc genhtml_function_coverage=1 00:30:53.785 --rc genhtml_legend=1 00:30:53.785 --rc geninfo_all_blocks=1 00:30:53.785 --rc geninfo_unexecuted_blocks=1 00:30:53.785 00:30:53.785 ' 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:53.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.785 --rc genhtml_branch_coverage=1 00:30:53.785 --rc genhtml_function_coverage=1 00:30:53.785 --rc genhtml_legend=1 00:30:53.785 --rc geninfo_all_blocks=1 00:30:53.785 --rc geninfo_unexecuted_blocks=1 00:30:53.785 00:30:53.785 ' 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:53.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.785 --rc genhtml_branch_coverage=1 00:30:53.785 --rc genhtml_function_coverage=1 00:30:53.785 --rc genhtml_legend=1 00:30:53.785 --rc geninfo_all_blocks=1 00:30:53.785 --rc geninfo_unexecuted_blocks=1 00:30:53.785 00:30:53.785 ' 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:53.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.785 --rc genhtml_branch_coverage=1 00:30:53.785 --rc genhtml_function_coverage=1 00:30:53.785 --rc genhtml_legend=1 00:30:53.785 --rc geninfo_all_blocks=1 00:30:53.785 --rc geninfo_unexecuted_blocks=1 00:30:53.785 00:30:53.785 ' 00:30:53.785 07:40:37 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:53.785 
07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:53.785 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:53.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:53.786 07:40:37 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:30:53.786 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.928 07:40:45 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:01.928 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:01.928 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:01.928 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:01.929 Found net devices under 0000:31:00.0: cvl_0_0 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:01.929 Found net devices under 0000:31:00.1: cvl_0_1 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:01.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:01.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:31:01.929 00:31:01.929 --- 10.0.0.2 ping statistics --- 00:31:01.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.929 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:01.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:01.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:31:01.929 00:31:01.929 --- 10.0.0.1 ping statistics --- 00:31:01.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.929 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:01.929 ************************************ 00:31:01.929 START TEST nvmf_digest_clean 00:31:01.929 ************************************ 00:31:01.929 
07:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2297489 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2297489 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2297489 ']' 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:01.929 07:40:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.929 07:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:01.929 [2024-11-26 07:40:45.974368] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:31:01.929 [2024-11-26 07:40:45.974418] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:02.190 [2024-11-26 07:40:46.061119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.190 [2024-11-26 07:40:46.098326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:02.190 [2024-11-26 07:40:46.098359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:02.190 [2024-11-26 07:40:46.098367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:02.190 [2024-11-26 07:40:46.098378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:02.190 [2024-11-26 07:40:46.098384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:02.190 [2024-11-26 07:40:46.098975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.760 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.760 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:02.760 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:02.760 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:02.760 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:02.760 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:02.760 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:31:02.760 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:31:02.760 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:31:02.760 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.760 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:02.760 null0 00:31:02.760 [2024-11-26 07:40:46.879567] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:03.021 [2024-11-26 07:40:46.903785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.021 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.021 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:31:03.021 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:03.021 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:03.021 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:31:03.021 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:31:03.021 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:31:03.021 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:03.021 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2297534 00:31:03.021 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2297534 /var/tmp/bperf.sock 00:31:03.021 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2297534 ']' 00:31:03.021 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:03.021 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:03.021 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:03.021 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:03.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:31:03.022 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:03.022 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:03.022 [2024-11-26 07:40:46.958557] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:31:03.022 [2024-11-26 07:40:46.958609] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2297534 ] 00:31:03.022 [2024-11-26 07:40:47.053833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.022 [2024-11-26 07:40:47.089867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.962 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:03.963 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:03.963 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:03.963 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:03.963 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:03.963 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:03.963 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:04.223 nvme0n1 00:31:04.223 07:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:04.223 07:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:04.223 Running I/O for 2 seconds... 00:31:06.552 18893.00 IOPS, 73.80 MiB/s [2024-11-26T06:40:50.689Z] 19528.50 IOPS, 76.28 MiB/s 00:31:06.552 Latency(us) 00:31:06.552 [2024-11-26T06:40:50.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:06.552 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:06.552 nvme0n1 : 2.00 19559.60 76.40 0.00 0.00 6538.70 2921.81 21299.20 00:31:06.552 [2024-11-26T06:40:50.689Z] =================================================================================================================== 00:31:06.552 [2024-11-26T06:40:50.689Z] Total : 19559.60 76.40 0.00 0.00 6538.70 2921.81 21299.20 00:31:06.552 { 00:31:06.552 "results": [ 00:31:06.552 { 00:31:06.552 "job": "nvme0n1", 00:31:06.552 "core_mask": "0x2", 00:31:06.552 "workload": "randread", 00:31:06.552 "status": "finished", 00:31:06.552 "queue_depth": 128, 00:31:06.552 "io_size": 4096, 00:31:06.552 "runtime": 2.003364, 00:31:06.552 "iops": 19559.600751535916, 00:31:06.552 "mibps": 76.40469043568717, 00:31:06.552 "io_failed": 0, 00:31:06.552 "io_timeout": 0, 00:31:06.552 "avg_latency_us": 6538.69945200119, 00:31:06.552 "min_latency_us": 2921.8133333333335, 00:31:06.552 "max_latency_us": 21299.2 00:31:06.552 } 00:31:06.552 ], 00:31:06.552 "core_count": 1 00:31:06.552 } 00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 
00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:06.552 | select(.opcode=="crc32c") 00:31:06.552 | "\(.module_name) \(.executed)"' 00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2297534 00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2297534 ']' 00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2297534 00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2297534 00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2297534' 00:31:06.552 killing process with pid 2297534 00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2297534 00:31:06.552 Received shutdown signal, test time was about 2.000000 seconds 00:31:06.552 00:31:06.552 Latency(us) 00:31:06.552 [2024-11-26T06:40:50.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:06.552 [2024-11-26T06:40:50.689Z] =================================================================================================================== 00:31:06.552 [2024-11-26T06:40:50.689Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:06.552 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2297534 00:31:06.814 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:31:06.814 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:06.814 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:06.814 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:31:06.814 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:31:06.814 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:31:06.814 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:06.814 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2298328 00:31:06.814 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2298328 
/var/tmp/bperf.sock 00:31:06.814 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2298328 ']' 00:31:06.814 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:06.814 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:06.814 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:06.814 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:06.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:06.814 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:06.814 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:06.814 [2024-11-26 07:40:50.766047] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:31:06.814 [2024-11-26 07:40:50.766106] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2298328 ] 00:31:06.814 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:06.814 Zero copy mechanism will not be used. 
00:31:06.814 [2024-11-26 07:40:50.854496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.814 [2024-11-26 07:40:50.883871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.755 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:07.755 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:07.755 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:07.755 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:07.755 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:07.755 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:07.755 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:08.016 nvme0n1 00:31:08.277 07:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:08.277 07:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:08.277 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:08.277 Zero copy mechanism will not be used. 00:31:08.277 Running I/O for 2 seconds... 
00:31:10.159 3584.00 IOPS, 448.00 MiB/s [2024-11-26T06:40:54.296Z] 3352.50 IOPS, 419.06 MiB/s 00:31:10.159 Latency(us) 00:31:10.159 [2024-11-26T06:40:54.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:10.159 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:10.159 nvme0n1 : 2.00 3354.11 419.26 0.00 0.00 4767.42 1174.19 8301.23 00:31:10.159 [2024-11-26T06:40:54.296Z] =================================================================================================================== 00:31:10.159 [2024-11-26T06:40:54.296Z] Total : 3354.11 419.26 0.00 0.00 4767.42 1174.19 8301.23 00:31:10.159 { 00:31:10.159 "results": [ 00:31:10.159 { 00:31:10.159 "job": "nvme0n1", 00:31:10.159 "core_mask": "0x2", 00:31:10.159 "workload": "randread", 00:31:10.159 "status": "finished", 00:31:10.159 "queue_depth": 16, 00:31:10.159 "io_size": 131072, 00:31:10.159 "runtime": 2.003809, 00:31:10.159 "iops": 3354.112093517895, 00:31:10.159 "mibps": 419.2640116897369, 00:31:10.159 "io_failed": 0, 00:31:10.159 "io_timeout": 0, 00:31:10.159 "avg_latency_us": 4767.419578435749, 00:31:10.159 "min_latency_us": 1174.1866666666667, 00:31:10.159 "max_latency_us": 8301.226666666667 00:31:10.159 } 00:31:10.159 ], 00:31:10.159 "core_count": 1 00:31:10.159 } 00:31:10.159 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:10.159 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:10.160 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:10.160 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:10.160 | select(.opcode=="crc32c") 00:31:10.160 | "\(.module_name) \(.executed)"' 00:31:10.160 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:10.419 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:10.419 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:10.419 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:10.419 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:10.419 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2298328 00:31:10.419 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2298328 ']' 00:31:10.419 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2298328 00:31:10.419 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:10.419 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:10.419 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2298328 00:31:10.420 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:10.420 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:10.420 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2298328' 00:31:10.420 killing process with pid 2298328 00:31:10.420 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2298328 00:31:10.420 Received shutdown signal, test time was about 2.000000 seconds 
00:31:10.420 00:31:10.420 Latency(us) 00:31:10.420 [2024-11-26T06:40:54.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:10.420 [2024-11-26T06:40:54.557Z] =================================================================================================================== 00:31:10.420 [2024-11-26T06:40:54.557Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:10.420 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2298328 00:31:10.680 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:31:10.680 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:10.680 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:10.680 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:31:10.680 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:31:10.680 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:31:10.680 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:10.680 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2299155 00:31:10.680 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2299155 /var/tmp/bperf.sock 00:31:10.680 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2299155 ']' 00:31:10.680 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:10.680 07:40:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:10.680 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:10.680 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:10.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:10.680 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:10.680 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:10.680 [2024-11-26 07:40:54.676119] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:31:10.680 [2024-11-26 07:40:54.676179] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2299155 ] 00:31:10.680 [2024-11-26 07:40:54.767526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.680 [2024-11-26 07:40:54.797517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.622 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:11.622 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:11.622 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:11.622 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:11.622 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:11.622 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:11.622 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:11.882 nvme0n1 00:31:11.882 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:11.882 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:12.142 Running I/O for 2 seconds... 
00:31:14.189 21079.00 IOPS, 82.34 MiB/s [2024-11-26T06:40:58.326Z] 21182.00 IOPS, 82.74 MiB/s 00:31:14.189 Latency(us) 00:31:14.189 [2024-11-26T06:40:58.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:14.189 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:14.189 nvme0n1 : 2.00 21207.21 82.84 0.00 0.00 6029.25 2252.80 16711.68 00:31:14.189 [2024-11-26T06:40:58.326Z] =================================================================================================================== 00:31:14.189 [2024-11-26T06:40:58.326Z] Total : 21207.21 82.84 0.00 0.00 6029.25 2252.80 16711.68 00:31:14.189 { 00:31:14.189 "results": [ 00:31:14.189 { 00:31:14.189 "job": "nvme0n1", 00:31:14.189 "core_mask": "0x2", 00:31:14.189 "workload": "randwrite", 00:31:14.189 "status": "finished", 00:31:14.189 "queue_depth": 128, 00:31:14.189 "io_size": 4096, 00:31:14.189 "runtime": 2.003658, 00:31:14.189 "iops": 21207.21200923511, 00:31:14.189 "mibps": 82.84067191107465, 00:31:14.189 "io_failed": 0, 00:31:14.189 "io_timeout": 0, 00:31:14.189 "avg_latency_us": 6029.248315290722, 00:31:14.189 "min_latency_us": 2252.8, 00:31:14.189 "max_latency_us": 16711.68 00:31:14.189 } 00:31:14.189 ], 00:31:14.189 "core_count": 1 00:31:14.189 } 00:31:14.189 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:14.189 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:14.189 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:14.189 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:14.189 | select(.opcode=="crc32c") 00:31:14.189 | "\(.module_name) \(.executed)"' 00:31:14.189 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:14.189 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:14.189 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:14.189 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:14.189 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:14.189 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2299155 00:31:14.189 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2299155 ']' 00:31:14.189 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2299155 00:31:14.189 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:14.189 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:14.189 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2299155 00:31:14.451 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:14.451 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:14.451 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2299155' 00:31:14.451 killing process with pid 2299155 00:31:14.451 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2299155 00:31:14.451 Received shutdown signal, test time was about 2.000000 seconds 
00:31:14.451 00:31:14.451 Latency(us) 00:31:14.451 [2024-11-26T06:40:58.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:14.451 [2024-11-26T06:40:58.588Z] =================================================================================================================== 00:31:14.452 [2024-11-26T06:40:58.589Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:14.452 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2299155 00:31:14.452 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:31:14.452 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:14.452 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:14.452 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:31:14.452 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:31:14.452 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:31:14.452 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:14.452 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2299890 00:31:14.452 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2299890 /var/tmp/bperf.sock 00:31:14.452 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2299890 ']' 00:31:14.452 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:14.452 07:40:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:14.452 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:14.452 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:14.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:14.452 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:14.452 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:14.452 [2024-11-26 07:40:58.511177] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:31:14.452 [2024-11-26 07:40:58.511248] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2299890 ] 00:31:14.452 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:14.452 Zero copy mechanism will not be used. 
00:31:14.712 [2024-11-26 07:40:58.601909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.712 [2024-11-26 07:40:58.630492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.283 07:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:15.283 07:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:15.283 07:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:15.283 07:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:15.283 07:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:15.544 07:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:15.544 07:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:15.805 nvme0n1 00:31:15.805 07:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:15.805 07:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:15.805 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:15.805 Zero copy mechanism will not be used. 00:31:15.805 Running I/O for 2 seconds... 
00:31:18.132 6216.00 IOPS, 777.00 MiB/s [2024-11-26T06:41:02.269Z] 6406.50 IOPS, 800.81 MiB/s 00:31:18.132 Latency(us) 00:31:18.132 [2024-11-26T06:41:02.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:18.132 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:18.132 nvme0n1 : 2.00 6406.31 800.79 0.00 0.00 2494.16 1331.20 12069.55 00:31:18.132 [2024-11-26T06:41:02.269Z] =================================================================================================================== 00:31:18.132 [2024-11-26T06:41:02.269Z] Total : 6406.31 800.79 0.00 0.00 2494.16 1331.20 12069.55 00:31:18.132 { 00:31:18.132 "results": [ 00:31:18.132 { 00:31:18.132 "job": "nvme0n1", 00:31:18.132 "core_mask": "0x2", 00:31:18.132 "workload": "randwrite", 00:31:18.132 "status": "finished", 00:31:18.132 "queue_depth": 16, 00:31:18.132 "io_size": 131072, 00:31:18.132 "runtime": 2.002558, 00:31:18.132 "iops": 6406.30633419856, 00:31:18.132 "mibps": 800.78829177482, 00:31:18.132 "io_failed": 0, 00:31:18.132 "io_timeout": 0, 00:31:18.132 "avg_latency_us": 2494.158771533245, 00:31:18.132 "min_latency_us": 1331.2, 00:31:18.132 "max_latency_us": 12069.546666666667 00:31:18.132 } 00:31:18.132 ], 00:31:18.132 "core_count": 1 00:31:18.132 } 00:31:18.132 07:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:18.132 07:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:18.132 07:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:18.132 07:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:18.132 | select(.opcode=="crc32c") 00:31:18.132 | "\(.module_name) \(.executed)"' 00:31:18.132 07:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2299890 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2299890 ']' 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2299890 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2299890 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2299890' 00:31:18.132 killing process with pid 2299890 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2299890 00:31:18.132 Received shutdown signal, test time was about 2.000000 seconds 
00:31:18.132 00:31:18.132 Latency(us) 00:31:18.132 [2024-11-26T06:41:02.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:18.132 [2024-11-26T06:41:02.269Z] =================================================================================================================== 00:31:18.132 [2024-11-26T06:41:02.269Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2299890 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2297489 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2297489 ']' 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2297489 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:18.132 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2297489 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2297489' 00:31:18.393 killing process with pid 2297489 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2297489 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2297489 00:31:18.393 00:31:18.393 
real 0m16.499s 00:31:18.393 user 0m32.640s 00:31:18.393 sys 0m3.610s 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:18.393 ************************************ 00:31:18.393 END TEST nvmf_digest_clean 00:31:18.393 ************************************ 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:18.393 ************************************ 00:31:18.393 START TEST nvmf_digest_error 00:31:18.393 ************************************ 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2300606 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2300606 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2300606 ']' 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:18.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:18.393 07:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:18.654 [2024-11-26 07:41:02.555691] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:31:18.654 [2024-11-26 07:41:02.555744] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:18.654 [2024-11-26 07:41:02.641302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.654 [2024-11-26 07:41:02.678128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:18.654 [2024-11-26 07:41:02.678163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:18.654 [2024-11-26 07:41:02.678171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:18.654 [2024-11-26 07:41:02.678177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:18.654 [2024-11-26 07:41:02.678183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:18.654 [2024-11-26 07:41:02.678788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.225 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:19.225 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:19.225 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:19.225 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:19.225 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:19.486 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:19.486 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:31:19.486 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.486 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:19.486 [2024-11-26 07:41:03.384815] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:31:19.486 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.486 07:41:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:31:19.486 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:31:19.486 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.486 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:19.486 null0 00:31:19.486 [2024-11-26 07:41:03.467075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:19.486 [2024-11-26 07:41:03.491292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:19.486 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.486 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:31:19.486 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:19.486 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:31:19.487 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:31:19.487 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:31:19.487 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2300904 00:31:19.487 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2300904 /var/tmp/bperf.sock 00:31:19.487 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2300904 ']' 00:31:19.487 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:31:19.487 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:19.487 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:19.487 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:19.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:19.487 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:19.487 07:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:19.487 [2024-11-26 07:41:03.559182] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:31:19.487 [2024-11-26 07:41:03.559230] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2300904 ] 00:31:19.746 [2024-11-26 07:41:03.648943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.746 [2024-11-26 07:41:03.678696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.317 07:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:20.317 07:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:20.317 07:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:20.317 07:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:20.578 07:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:20.578 07:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.578 07:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:20.578 07:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.578 07:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:20.578 07:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:20.839 nvme0n1 00:31:20.839 07:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:20.839 07:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.839 07:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:20.839 07:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.839 07:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:20.839 07:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:21.100 Running I/O for 2 seconds... 00:31:21.101 [2024-11-26 07:41:05.028432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.101 [2024-11-26 07:41:05.028463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.101 [2024-11-26 07:41:05.028472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.101 [2024-11-26 07:41:05.042259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.101 [2024-11-26 07:41:05.042278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.101 [2024-11-26 07:41:05.042286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.101 [2024-11-26 07:41:05.055139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.101 [2024-11-26 07:41:05.055158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.101 [2024-11-26 07:41:05.055166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.101 [2024-11-26 07:41:05.068002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.101 [2024-11-26 07:41:05.068020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21066 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.101 [2024-11-26 07:41:05.068027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.101 [2024-11-26 07:41:05.081829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.101 [2024-11-26 07:41:05.081847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.101 [2024-11-26 07:41:05.081854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.101 [2024-11-26 07:41:05.093716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.101 [2024-11-26 07:41:05.093733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.101 [2024-11-26 07:41:05.093740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.101 [2024-11-26 07:41:05.105273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.101 [2024-11-26 07:41:05.105290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.101 [2024-11-26 07:41:05.105297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.101 [2024-11-26 07:41:05.117535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.101 [2024-11-26 07:41:05.117553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.101 [2024-11-26 07:41:05.117560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.101 [2024-11-26 07:41:05.131994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.101 [2024-11-26 07:41:05.132011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.101 [2024-11-26 07:41:05.132018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.101 [2024-11-26 07:41:05.143722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.101 [2024-11-26 07:41:05.143739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.101 [2024-11-26 07:41:05.143745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.101 [2024-11-26 07:41:05.154663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.101 [2024-11-26 07:41:05.154680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.101 [2024-11-26 07:41:05.154687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.101 [2024-11-26 07:41:05.168317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 
00:31:21.101 [2024-11-26 07:41:05.168335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.101 [2024-11-26 07:41:05.168342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.101 [2024-11-26 07:41:05.182080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.101 [2024-11-26 07:41:05.182097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.101 [2024-11-26 07:41:05.182107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.101 [2024-11-26 07:41:05.195097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.101 [2024-11-26 07:41:05.195114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.101 [2024-11-26 07:41:05.195121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.101 [2024-11-26 07:41:05.204422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.101 [2024-11-26 07:41:05.204440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.101 [2024-11-26 07:41:05.204446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.101 [2024-11-26 07:41:05.220105] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.101 [2024-11-26 07:41:05.220122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.101 [2024-11-26 07:41:05.220129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.363 [2024-11-26 07:41:05.232045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.363 [2024-11-26 07:41:05.232062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.363 [2024-11-26 07:41:05.232069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.363 [2024-11-26 07:41:05.244633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.363 [2024-11-26 07:41:05.244650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.363 [2024-11-26 07:41:05.244657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.363 [2024-11-26 07:41:05.257190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.363 [2024-11-26 07:41:05.257207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.363 [2024-11-26 07:41:05.257214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:31:21.363 [2024-11-26 07:41:05.268731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.363 [2024-11-26 07:41:05.268748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.363 [2024-11-26 07:41:05.268755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.363 [2024-11-26 07:41:05.281230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.363 [2024-11-26 07:41:05.281248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.363 [2024-11-26 07:41:05.281255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.363 [2024-11-26 07:41:05.294104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.363 [2024-11-26 07:41:05.294121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.363 [2024-11-26 07:41:05.294128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.363 [2024-11-26 07:41:05.307803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:21.363 [2024-11-26 07:41:05.307820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.363 [2024-11-26 07:41:05.307827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:21.363 [2024-11-26 07:41:05.320983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0)
00:31:21.363 [2024-11-26 07:41:05.321000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:21.363 [2024-11-26 07:41:05.321007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[same three-line sequence — nvme_tcp.c:1365 data digest error on tqpair=(0x15272b0), nvme_qpair.c:243 READ sqid:1 command notice, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeated with varying cid/lba from 07:41:05.332548 through 07:41:06.007198]
00:31:22.150 20176.00 IOPS, 78.81 MiB/s [2024-11-26T06:41:06.287Z]
[same three-line sequence repeated with varying cid/lba from 07:41:06.023750 through 07:41:06.324093]
00:31:22.414 [2024-11-26 07:41:06.337084]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.414 [2024-11-26 07:41:06.337100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.414 [2024-11-26 07:41:06.337106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.414 [2024-11-26 07:41:06.349218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.414 [2024-11-26 07:41:06.349235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.414 [2024-11-26 07:41:06.349241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.414 [2024-11-26 07:41:06.362093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.414 [2024-11-26 07:41:06.362110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.414 [2024-11-26 07:41:06.362116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.414 [2024-11-26 07:41:06.372787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.414 [2024-11-26 07:41:06.372804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.414 [2024-11-26 07:41:06.372810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:22.414 [2024-11-26 07:41:06.386021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.414 [2024-11-26 07:41:06.386038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.414 [2024-11-26 07:41:06.386045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.414 [2024-11-26 07:41:06.399870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.414 [2024-11-26 07:41:06.399887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.414 [2024-11-26 07:41:06.399894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.414 [2024-11-26 07:41:06.413155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.414 [2024-11-26 07:41:06.413172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.414 [2024-11-26 07:41:06.413178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.414 [2024-11-26 07:41:06.424771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.414 [2024-11-26 07:41:06.424788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.414 [2024-11-26 07:41:06.424794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.414 [2024-11-26 07:41:06.437034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.414 [2024-11-26 07:41:06.437051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.414 [2024-11-26 07:41:06.437058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.414 [2024-11-26 07:41:06.449511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.414 [2024-11-26 07:41:06.449528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.414 [2024-11-26 07:41:06.449534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.414 [2024-11-26 07:41:06.463297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.414 [2024-11-26 07:41:06.463314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.414 [2024-11-26 07:41:06.463321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.414 [2024-11-26 07:41:06.475803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.414 [2024-11-26 07:41:06.475820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.414 [2024-11-26 
07:41:06.475829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.414 [2024-11-26 07:41:06.487424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.414 [2024-11-26 07:41:06.487440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.414 [2024-11-26 07:41:06.487447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.414 [2024-11-26 07:41:06.498886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.414 [2024-11-26 07:41:06.498903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.414 [2024-11-26 07:41:06.498910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.414 [2024-11-26 07:41:06.513572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.414 [2024-11-26 07:41:06.513589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.414 [2024-11-26 07:41:06.513595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.414 [2024-11-26 07:41:06.525052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.414 [2024-11-26 07:41:06.525069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7810 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.414 [2024-11-26 07:41:06.525076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.414 [2024-11-26 07:41:06.537369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.414 [2024-11-26 07:41:06.537386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.414 [2024-11-26 07:41:06.537392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.551260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.551277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.551284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.562929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.562945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.562952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.574557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.574575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.574581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.588446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.588466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.588473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.600688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.600705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.600712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.612833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.612850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.612857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.623331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.623349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.623355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.636380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.636396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.636403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.649723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.649740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.649747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.663839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.663856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.663867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.676410] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.676428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.676435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.688013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.688030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.688037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.702343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.702360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.702367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.713984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.714001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.714007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.725784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.725802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.725808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.740080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.740097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.740103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.750705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.750721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.750728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.764410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.764426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.764433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.776359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.776376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.776382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.788500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.788518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.788524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.677 [2024-11-26 07:41:06.801017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.677 [2024-11-26 07:41:06.801034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.677 [2024-11-26 07:41:06.801043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.940 [2024-11-26 07:41:06.811669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.940 [2024-11-26 07:41:06.811687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.940 [2024-11-26 07:41:06.811694] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.940 [2024-11-26 07:41:06.826037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.940 [2024-11-26 07:41:06.826055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.940 [2024-11-26 07:41:06.826062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.940 [2024-11-26 07:41:06.840003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.940 [2024-11-26 07:41:06.840021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.940 [2024-11-26 07:41:06.840027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.940 [2024-11-26 07:41:06.852544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.940 [2024-11-26 07:41:06.852561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.940 [2024-11-26 07:41:06.852568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.940 [2024-11-26 07:41:06.865814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.940 [2024-11-26 07:41:06.865831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18438 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:22.940 [2024-11-26 07:41:06.865838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.940 [2024-11-26 07:41:06.877685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.940 [2024-11-26 07:41:06.877702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.940 [2024-11-26 07:41:06.877709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.940 [2024-11-26 07:41:06.888949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.940 [2024-11-26 07:41:06.888966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.940 [2024-11-26 07:41:06.888973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.940 [2024-11-26 07:41:06.902499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.940 [2024-11-26 07:41:06.902516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.940 [2024-11-26 07:41:06.902523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.940 [2024-11-26 07:41:06.913691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.940 [2024-11-26 07:41:06.913711] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.940 [2024-11-26 07:41:06.913718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.940 [2024-11-26 07:41:06.925937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.940 [2024-11-26 07:41:06.925954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.940 [2024-11-26 07:41:06.925962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.940 [2024-11-26 07:41:06.938944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.940 [2024-11-26 07:41:06.938961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.940 [2024-11-26 07:41:06.938967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.940 [2024-11-26 07:41:06.951946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.940 [2024-11-26 07:41:06.951963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.940 [2024-11-26 07:41:06.951970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.940 [2024-11-26 07:41:06.965651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.940 
[2024-11-26 07:41:06.965669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.940 [2024-11-26 07:41:06.965675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.940 [2024-11-26 07:41:06.978211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.940 [2024-11-26 07:41:06.978228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.940 [2024-11-26 07:41:06.978235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.940 [2024-11-26 07:41:06.990114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.940 [2024-11-26 07:41:06.990131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.940 [2024-11-26 07:41:06.990137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.940 [2024-11-26 07:41:07.000708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.940 [2024-11-26 07:41:07.000725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.941 [2024-11-26 07:41:07.000732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.941 [2024-11-26 07:41:07.014116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x15272b0) 00:31:22.941 [2024-11-26 07:41:07.014133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.941 [2024-11-26 07:41:07.014143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:22.941 20282.50 IOPS, 79.23 MiB/s 00:31:22.941 Latency(us) 00:31:22.941 [2024-11-26T06:41:07.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:22.941 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:22.941 nvme0n1 : 2.00 20306.50 79.32 0.00 0.00 6297.53 2198.19 17913.17 00:31:22.941 [2024-11-26T06:41:07.078Z] =================================================================================================================== 00:31:22.941 [2024-11-26T06:41:07.078Z] Total : 20306.50 79.32 0.00 0.00 6297.53 2198.19 17913.17 00:31:22.941 { 00:31:22.941 "results": [ 00:31:22.941 { 00:31:22.941 "job": "nvme0n1", 00:31:22.941 "core_mask": "0x2", 00:31:22.941 "workload": "randread", 00:31:22.941 "status": "finished", 00:31:22.941 "queue_depth": 128, 00:31:22.941 "io_size": 4096, 00:31:22.941 "runtime": 2.00394, 00:31:22.941 "iops": 20306.496202481114, 00:31:22.941 "mibps": 79.32225079094185, 00:31:22.941 "io_failed": 0, 00:31:22.941 "io_timeout": 0, 00:31:22.941 "avg_latency_us": 6297.525430581837, 00:31:22.941 "min_latency_us": 2198.1866666666665, 00:31:22.941 "max_latency_us": 17913.173333333332 00:31:22.941 } 00:31:22.941 ], 00:31:22.941 "core_count": 1 00:31:22.941 } 00:31:22.941 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:22.941 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:22.941 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:22.941 | .driver_specific
00:31:22.941 | .nvme_error
00:31:22.941 | .status_code
00:31:22.941 | .command_transient_transport_error'
00:31:22.941 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:23.203 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 ))
00:31:23.203 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2300904
00:31:23.203 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2300904 ']'
00:31:23.203 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2300904
00:31:23.203 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:31:23.203 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:23.203 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2300904
00:31:23.203 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:23.203 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:23.203 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2300904'
00:31:23.203 killing process with pid 2300904
00:31:23.203 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2300904
00:31:23.203 Received shutdown signal, test time was about 2.000000 seconds
00:31:23.203
00:31:23.203 Latency(us)
00:31:23.203 [2024-11-26T06:41:07.340Z]
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:23.203 [2024-11-26T06:41:07.340Z] ===================================================================================================================
00:31:23.203 [2024-11-26T06:41:07.340Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:23.203 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2300904
00:31:23.464 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:31:23.464 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:31:23.464 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:31:23.464 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:31:23.464 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:31:23.464 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2301640
00:31:23.464 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2301640 /var/tmp/bperf.sock
00:31:23.465 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2301640 ']'
00:31:23.465 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:31:23.465 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:23.465 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:23.465 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:23.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:23.465 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:23.465 07:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:23.465 [2024-11-26 07:41:07.432635] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization...
00:31:23.465 [2024-11-26 07:41:07.432692] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2301640 ]
00:31:23.465 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:23.465 Zero copy mechanism will not be used.
00:31:23.465 [2024-11-26 07:41:07.522277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:23.465 [2024-11-26 07:41:07.551692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:24.406 07:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:24.406 07:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:31:24.406 07:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:24.406 07:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:24.406 07:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:24.406 07:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:24.406 07:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:24.406 07:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:24.406 07:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:24.406 07:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:24.666 nvme0n1
00:31:24.666 07:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:31:24.666 07:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:24.666 07:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:24.666 07:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:24.666 07:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:24.666 07:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:24.666 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:24.666 Zero copy mechanism will not be used.
00:31:24.666 Running I/O for 2 seconds...
00:31:24.666 [2024-11-26 07:41:08.748227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.666 [2024-11-26 07:41:08.748260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.666 [2024-11-26 07:41:08.748269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:24.666 [2024-11-26 07:41:08.753434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.666 [2024-11-26 07:41:08.753456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.666 [2024-11-26 07:41:08.753463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:24.666 [2024-11-26 07:41:08.756736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.666 [2024-11-26 07:41:08.756755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.666 [2024-11-26 07:41:08.756762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:24.666 [2024-11-26 07:41:08.761714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.666 [2024-11-26 07:41:08.761733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.666 [2024-11-26 07:41:08.761739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:24.666 [2024-11-26 07:41:08.766003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.666 [2024-11-26 07:41:08.766021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.666 [2024-11-26 07:41:08.766027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:24.666 [2024-11-26 07:41:08.771219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.666 [2024-11-26 07:41:08.771238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.666 [2024-11-26 07:41:08.771245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:24.666 [2024-11-26 07:41:08.780217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.666 [2024-11-26 07:41:08.780235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.666 [2024-11-26 07:41:08.780241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:24.666 [2024-11-26 07:41:08.790473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.666 [2024-11-26 07:41:08.790500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.666 [2024-11-26 07:41:08.790506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:24.927 [2024-11-26 07:41:08.798792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.798811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.798818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.804271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.804289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.804296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.812424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.812442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.812449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.822354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.822372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.822379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.828261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.828279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.828286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.835617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.835635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.835642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.843724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.843743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.843750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.849141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.849159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.849165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.858648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.858667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.858674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.867666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.867685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.867692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.878879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.878898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.878904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.886292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.886311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.886318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.893839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.893857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.893869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.902355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.902373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.902380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.909965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.909984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.909990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.921759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.921779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.921785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.932763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.932782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.932792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.944480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.944498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.944504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.948760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.948778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.948785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.953062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.953079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.953086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.956478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.956495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.956502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.964565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.964583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.964590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.973152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.973169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.973176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.980825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.980842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.980848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.986132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.986148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.986155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:08.997227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:08.997247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.928 [2024-11-26 07:41:08.997254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:24.928 [2024-11-26 07:41:09.005808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.928 [2024-11-26 07:41:09.005826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.929 [2024-11-26 07:41:09.005832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:24.929 [2024-11-26 07:41:09.014046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.929 [2024-11-26 07:41:09.014063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.929 [2024-11-26 07:41:09.014071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:24.929 [2024-11-26 07:41:09.020422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.929 [2024-11-26 07:41:09.020439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.929 [2024-11-26 07:41:09.020445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:24.929 [2024-11-26 07:41:09.030229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.929 [2024-11-26 07:41:09.030246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.929 [2024-11-26 07:41:09.030253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:24.929 [2024-11-26 07:41:09.040505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.929 [2024-11-26 07:41:09.040522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.929 [2024-11-26 07:41:09.040528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:24.929 [2024-11-26 07:41:09.050700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:24.929 [2024-11-26 07:41:09.050717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:24.929 [2024-11-26 07:41:09.050724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:25.193 [2024-11-26 07:41:09.058306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.193 [2024-11-26 07:41:09.058323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.193 [2024-11-26 07:41:09.058330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:25.193 [2024-11-26 07:41:09.068993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.193 [2024-11-26 07:41:09.069010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.193 [2024-11-26 07:41:09.069016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:25.193 [2024-11-26 07:41:09.077724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.193 [2024-11-26 07:41:09.077742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.193 [2024-11-26 07:41:09.077748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:25.193 [2024-11-26 07:41:09.082894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.193 [2024-11-26 07:41:09.082911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.193 [2024-11-26 07:41:09.082917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:25.193 [2024-11-26 07:41:09.090950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.193 [2024-11-26 07:41:09.090967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.193 [2024-11-26 07:41:09.090974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:25.193 [2024-11-26 07:41:09.101457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.193 [2024-11-26 07:41:09.101476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.193 [2024-11-26 07:41:09.101483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:25.193 [2024-11-26 07:41:09.112264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.193 [2024-11-26 07:41:09.112282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.193 [2024-11-26 07:41:09.112288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:25.193 [2024-11-26 07:41:09.119135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.193 [2024-11-26 07:41:09.119152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.193 [2024-11-26 07:41:09.119159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:25.193 [2024-11-26 07:41:09.127682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.193 [2024-11-26 07:41:09.127698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.193 [2024-11-26 07:41:09.127705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:25.193 [2024-11-26 07:41:09.137444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.193 [2024-11-26 07:41:09.137461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.193 [2024-11-26 07:41:09.137468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:25.193 [2024-11-26 07:41:09.147742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.193 [2024-11-26 07:41:09.147762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.193 [2024-11-26 07:41:09.147768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:25.193 [2024-11-26 07:41:09.158180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.193 [2024-11-26 07:41:09.158198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.193 [2024-11-26 07:41:09.158204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:25.193 [2024-11-26 07:41:09.170513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.193 [2024-11-26 07:41:09.170530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.193 [2024-11-26 07:41:09.170537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:25.193 [2024-11-26 07:41:09.179270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.193 [2024-11-26 07:41:09.179287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.193 [2024-11-26 07:41:09.179294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:25.193 [2024-11-26 07:41:09.190849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.193 [2024-11-26 07:41:09.190872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.193 [2024-11-26 07:41:09.190879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:25.193 [2024-11-26 07:41:09.200926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.193 [2024-11-26 07:41:09.200943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.193 [2024-11-26 07:41:09.200950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:25.193 [2024-11-26 07:41:09.212306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.193 [2024-11-26 07:41:09.212324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.193 [2024-11-26 07:41:09.212330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:25.193 [2024-11-26 07:41:09.222134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.193 [2024-11-26 07:41:09.222151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.193 [2024-11-26 07:41:09.222157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0
sqhd:0022 p:0 m:0 dnr:0 00:31:25.193 [2024-11-26 07:41:09.232820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.193 [2024-11-26 07:41:09.232838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.193 [2024-11-26 07:41:09.232844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:25.193 [2024-11-26 07:41:09.246516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.193 [2024-11-26 07:41:09.246534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.193 [2024-11-26 07:41:09.246540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:25.193 [2024-11-26 07:41:09.256698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.193 [2024-11-26 07:41:09.256716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.193 [2024-11-26 07:41:09.256722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:25.193 [2024-11-26 07:41:09.269558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.194 [2024-11-26 07:41:09.269576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.194 [2024-11-26 07:41:09.269582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:25.194 [2024-11-26 07:41:09.280426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.194 [2024-11-26 07:41:09.280443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.194 [2024-11-26 07:41:09.280450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:25.194 [2024-11-26 07:41:09.290676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.194 [2024-11-26 07:41:09.290693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.194 [2024-11-26 07:41:09.290700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:25.194 [2024-11-26 07:41:09.300552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.194 [2024-11-26 07:41:09.300569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.194 [2024-11-26 07:41:09.300576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:25.194 [2024-11-26 07:41:09.310722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.194 [2024-11-26 07:41:09.310740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.194 [2024-11-26 
07:41:09.310747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:25.194 [2024-11-26 07:41:09.318420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.194 [2024-11-26 07:41:09.318438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.194 [2024-11-26 07:41:09.318445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:25.456 [2024-11-26 07:41:09.328330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.456 [2024-11-26 07:41:09.328349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.456 [2024-11-26 07:41:09.328359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:25.456 [2024-11-26 07:41:09.339134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.339153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.339159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.349708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.349727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2080 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.349733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.360693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.360711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.360718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.368439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.368457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.368464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.378922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.378940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.378947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.387173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.387191] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.387198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.396436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.396454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.396461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.407102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.407120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.407127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.417046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.417066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.417072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.428419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 
07:41:09.428437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.428444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.440007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.440025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.440032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.450372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.450390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.450396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.461385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.461402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.461409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.468621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.468640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.468646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.480373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.480392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.480398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.490926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.490944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.490950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.502753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.502771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.502778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.515983] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.516002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.516008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.529028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.529046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.529053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.541770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.541788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.541794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:25.457 [2024-11-26 07:41:09.553682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.457 [2024-11-26 07:41:09.553701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.457 [2024-11-26 07:41:09.553707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:31:25.458 [2024-11-26 07:41:09.565606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.458 [2024-11-26 07:41:09.565625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.458 [2024-11-26 07:41:09.565631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:25.458 [2024-11-26 07:41:09.576470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.458 [2024-11-26 07:41:09.576489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.458 [2024-11-26 07:41:09.576495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:25.719 [2024-11-26 07:41:09.587801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.719 [2024-11-26 07:41:09.587819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.719 [2024-11-26 07:41:09.587826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:25.719 [2024-11-26 07:41:09.597608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.719 [2024-11-26 07:41:09.597626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.719 [2024-11-26 07:41:09.597633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:25.719 [2024-11-26 07:41:09.608589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.719 [2024-11-26 07:41:09.608607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.719 [2024-11-26 07:41:09.608617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:25.719 [2024-11-26 07:41:09.616296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.719 [2024-11-26 07:41:09.616314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.719 [2024-11-26 07:41:09.616321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:25.719 [2024-11-26 07:41:09.626806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.719 [2024-11-26 07:41:09.626825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.719 [2024-11-26 07:41:09.626831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:25.719 [2024-11-26 07:41:09.637858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.719 [2024-11-26 07:41:09.637881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.719 [2024-11-26 07:41:09.637888] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:25.719 [2024-11-26 07:41:09.649480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.719 [2024-11-26 07:41:09.649498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.719 [2024-11-26 07:41:09.649505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:25.719 [2024-11-26 07:41:09.658762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.719 [2024-11-26 07:41:09.658780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.719 [2024-11-26 07:41:09.658787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:25.719 [2024-11-26 07:41:09.668946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.719 [2024-11-26 07:41:09.668965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.719 [2024-11-26 07:41:09.668972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:25.719 [2024-11-26 07:41:09.678954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.719 [2024-11-26 07:41:09.678972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:25.719 [2024-11-26 07:41:09.678979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:25.719 [2024-11-26 07:41:09.689760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.719 [2024-11-26 07:41:09.689778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.719 [2024-11-26 07:41:09.689785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:25.719 [2024-11-26 07:41:09.700601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.719 [2024-11-26 07:41:09.700623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.719 [2024-11-26 07:41:09.700629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:25.719 [2024-11-26 07:41:09.711093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.719 [2024-11-26 07:41:09.711111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.719 [2024-11-26 07:41:09.711118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:25.719 [2024-11-26 07:41:09.721070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.719 [2024-11-26 07:41:09.721088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.719 [2024-11-26 07:41:09.721095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:25.719 [2024-11-26 07:41:09.728889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.719 [2024-11-26 07:41:09.728907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.719 [2024-11-26 07:41:09.728914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:25.719 [2024-11-26 07:41:09.737238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.719 [2024-11-26 07:41:09.737256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.719 [2024-11-26 07:41:09.737263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:25.719 3306.00 IOPS, 413.25 MiB/s [2024-11-26T06:41:09.856Z] [2024-11-26 07:41:09.748077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:25.719 [2024-11-26 07:41:09.748095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.719 [2024-11-26 07:41:09.748102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:25.719 [2024-11-26 07:41:09.759633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 
00:31:25.719 [2024-11-26 07:41:09.759651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.719 [2024-11-26 07:41:09.759657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:25.719 [2024-11-26 07:41:09.769660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.719 [2024-11-26 07:41:09.769678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.720 [2024-11-26 07:41:09.769684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:25.720 [2024-11-26 07:41:09.780971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.720 [2024-11-26 07:41:09.780989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.720 [2024-11-26 07:41:09.780998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:25.720 [2024-11-26 07:41:09.791150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.720 [2024-11-26 07:41:09.791168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.720 [2024-11-26 07:41:09.791175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:25.720 [2024-11-26 07:41:09.800548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.720 [2024-11-26 07:41:09.800566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.720 [2024-11-26 07:41:09.800573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:25.720 [2024-11-26 07:41:09.811927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.720 [2024-11-26 07:41:09.811946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.720 [2024-11-26 07:41:09.811952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:25.720 [2024-11-26 07:41:09.823742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.720 [2024-11-26 07:41:09.823760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.720 [2024-11-26 07:41:09.823766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:25.720 [2024-11-26 07:41:09.835043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.720 [2024-11-26 07:41:09.835062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.720 [2024-11-26 07:41:09.835068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:25.720 [2024-11-26 07:41:09.846474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.720 [2024-11-26 07:41:09.846493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.720 [2024-11-26 07:41:09.846499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:25.981 [2024-11-26 07:41:09.856373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.981 [2024-11-26 07:41:09.856391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.981 [2024-11-26 07:41:09.856398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:25.981 [2024-11-26 07:41:09.867233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.981 [2024-11-26 07:41:09.867251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.981 [2024-11-26 07:41:09.867258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:25.981 [2024-11-26 07:41:09.875629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.981 [2024-11-26 07:41:09.875649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.981 [2024-11-26 07:41:09.875656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:25.981 [2024-11-26 07:41:09.884327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.981 [2024-11-26 07:41:09.884345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.981 [2024-11-26 07:41:09.884352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:25.981 [2024-11-26 07:41:09.895547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.981 [2024-11-26 07:41:09.895565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.981 [2024-11-26 07:41:09.895572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:25.981 [2024-11-26 07:41:09.907094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.981 [2024-11-26 07:41:09.907113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.981 [2024-11-26 07:41:09.907119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:09.918277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:09.918295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:09.918301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:09.928454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:09.928472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:09.928479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:09.937306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:09.937324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:09.937331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:09.946410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:09.946428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:09.946435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:09.957347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:09.957365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:09.957373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:09.966537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:09.966556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:09.966563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:09.978433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:09.978452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:09.978459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:09.988575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:09.988594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:09.988600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:09.998271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:09.998291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:09.998297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:10.009587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:10.009607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:10.009614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:10.015952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:10.015971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:10.015978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:10.022027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:10.022046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:10.022053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:10.032463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:10.032481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:10.032489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:10.043836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:10.043853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:10.043871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:10.054270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:10.054288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:10.054294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:10.066026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:10.066044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:10.066050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:10.074028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:10.074046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:10.074054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:10.085412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:10.085430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:10.085436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:10.097209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:10.097228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:10.097235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:25.982 [2024-11-26 07:41:10.108127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:25.982 [2024-11-26 07:41:10.108146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:25.982 [2024-11-26 07:41:10.108152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:26.243 [2024-11-26 07:41:10.118600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.243 [2024-11-26 07:41:10.118619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.243 [2024-11-26 07:41:10.118626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:26.243 [2024-11-26 07:41:10.128949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.243 [2024-11-26 07:41:10.128967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.243 [2024-11-26 07:41:10.128974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:26.243 [2024-11-26 07:41:10.139249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.243 [2024-11-26 07:41:10.139272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.243 [2024-11-26 07:41:10.139279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:26.243 [2024-11-26 07:41:10.148052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.243 [2024-11-26 07:41:10.148071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.243 [2024-11-26 07:41:10.148077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:26.243 [2024-11-26 07:41:10.157665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.243 [2024-11-26 07:41:10.157684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.243 [2024-11-26 07:41:10.157690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:26.243 [2024-11-26 07:41:10.168107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.243 [2024-11-26 07:41:10.168126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.168133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:26.244 [2024-11-26 07:41:10.179994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.244 [2024-11-26 07:41:10.180012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.180019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:26.244 [2024-11-26 07:41:10.191772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.244 [2024-11-26 07:41:10.191791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.191798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:26.244 [2024-11-26 07:41:10.203425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.244 [2024-11-26 07:41:10.203443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.203449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:26.244 [2024-11-26 07:41:10.214912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.244 [2024-11-26 07:41:10.214931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.214937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:26.244 [2024-11-26 07:41:10.223784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.244 [2024-11-26 07:41:10.223803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.223809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:26.244 [2024-11-26 07:41:10.234314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.244 [2024-11-26 07:41:10.234333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.234340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:26.244 [2024-11-26 07:41:10.243263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.244 [2024-11-26 07:41:10.243281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.243288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:26.244 [2024-11-26 07:41:10.255280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.244 [2024-11-26 07:41:10.255298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.255305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:26.244 [2024-11-26 07:41:10.267062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.244 [2024-11-26 07:41:10.267080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.267087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:26.244 [2024-11-26 07:41:10.276765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.244 [2024-11-26 07:41:10.276784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.276790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:26.244 [2024-11-26 07:41:10.287006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.244 [2024-11-26 07:41:10.287024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.287031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:26.244 [2024-11-26 07:41:10.296603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.244 [2024-11-26 07:41:10.296621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.296628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:26.244 [2024-11-26 07:41:10.307304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.244 [2024-11-26 07:41:10.307322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.307329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:26.244 [2024-11-26 07:41:10.317553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.244 [2024-11-26 07:41:10.317571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.317580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:26.244 [2024-11-26 07:41:10.329411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.244 [2024-11-26 07:41:10.329430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.329437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:26.244 [2024-11-26 07:41:10.337291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.244 [2024-11-26 07:41:10.337309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.337315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:26.244 [2024-11-26 07:41:10.348091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.244 [2024-11-26 07:41:10.348110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.348116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:26.244 [2024-11-26 07:41:10.357966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.244 [2024-11-26 07:41:10.357984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.357991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:26.244 [2024-11-26 07:41:10.367607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.244 [2024-11-26 07:41:10.367625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.244 [2024-11-26 07:41:10.367631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:26.505 [2024-11-26 07:41:10.378175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.505 [2024-11-26 07:41:10.378194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.505 [2024-11-26 07:41:10.378200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:26.505 [2024-11-26 07:41:10.388979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.505 [2024-11-26 07:41:10.388997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.505 [2024-11-26 07:41:10.389004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:26.505 [2024-11-26 07:41:10.396344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.505 [2024-11-26 07:41:10.396363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.505 [2024-11-26 07:41:10.396369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:26.505 [2024-11-26 07:41:10.407730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.505 [2024-11-26 07:41:10.407752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.505 [2024-11-26 07:41:10.407758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:26.505 [2024-11-26 07:41:10.417666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.505 [2024-11-26 07:41:10.417684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.505 [2024-11-26 07:41:10.417690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:26.505 [2024-11-26 07:41:10.427807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.506 [2024-11-26 07:41:10.427826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.506 [2024-11-26 07:41:10.427833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:26.506 [2024-11-26 07:41:10.437045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.506 [2024-11-26 07:41:10.437064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.506 [2024-11-26 07:41:10.437072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:26.506 [2024-11-26 07:41:10.448930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.506 [2024-11-26 07:41:10.448949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.506 [2024-11-26 07:41:10.448955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:26.506 [2024-11-26 07:41:10.456822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.506 [2024-11-26 07:41:10.456840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.506 [2024-11-26 07:41:10.456848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:26.506 [2024-11-26 07:41:10.467906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.506 [2024-11-26 07:41:10.467925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.506 [2024-11-26 07:41:10.467932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:26.506 [2024-11-26 07:41:10.477682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.506 [2024-11-26 07:41:10.477700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.506 [2024-11-26 07:41:10.477707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:26.506 [2024-11-26 07:41:10.488554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.506 [2024-11-26 07:41:10.488572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.506 [2024-11-26 07:41:10.488579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:26.506 [2024-11-26 07:41:10.497972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.506 [2024-11-26 07:41:10.497990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.506 [2024-11-26 07:41:10.497996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:26.506 [2024-11-26 07:41:10.508137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.506 [2024-11-26 07:41:10.508155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.506 [2024-11-26 07:41:10.508162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:26.506 [2024-11-26 07:41:10.517263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.506 [2024-11-26 07:41:10.517281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.506 [2024-11-26 07:41:10.517288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:26.506 [2024-11-26 07:41:10.527524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.506 [2024-11-26 07:41:10.527543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.506 [2024-11-26 07:41:10.527550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:26.506 [2024-11-26 07:41:10.538140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.506 [2024-11-26 07:41:10.538159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.506 [2024-11-26 07:41:10.538165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:26.506 [2024-11-26 07:41:10.548268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.506 [2024-11-26 07:41:10.548287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.506 [2024-11-26 07:41:10.548294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:26.506 [2024-11-26 07:41:10.560143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.506 [2024-11-26 07:41:10.560162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.506 [2024-11-26 07:41:10.560168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:26.506 [2024-11-26 07:41:10.571806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.506 [2024-11-26 07:41:10.571825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.506 [2024-11-26 07:41:10.571831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:26.506 [2024-11-26 07:41:10.582050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.506 [2024-11-26 07:41:10.582068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.506 [2024-11-26 07:41:10.582078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:26.506 [2024-11-26 07:41:10.591872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790)
00:31:26.506 [2024-11-26 07:41:10.591889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.506 [2024-11-26 07:41:10.591896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.506 [2024-11-26 07:41:10.598240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:26.506 [2024-11-26 07:41:10.598258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.506 [2024-11-26 07:41:10.598264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.506 [2024-11-26 07:41:10.608542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:26.506 [2024-11-26 07:41:10.608559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.506 [2024-11-26 07:41:10.608566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.506 [2024-11-26 07:41:10.617772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:26.506 [2024-11-26 07:41:10.617790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.506 [2024-11-26 07:41:10.617797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.506 [2024-11-26 07:41:10.629524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:26.506 [2024-11-26 07:41:10.629542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.506 [2024-11-26 07:41:10.629548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.768 [2024-11-26 07:41:10.641418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:26.768 [2024-11-26 07:41:10.641436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.768 [2024-11-26 07:41:10.641443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.768 [2024-11-26 07:41:10.652457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:26.768 [2024-11-26 07:41:10.652475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.768 [2024-11-26 07:41:10.652481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.768 [2024-11-26 07:41:10.659278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:26.768 [2024-11-26 07:41:10.659295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.768 [2024-11-26 07:41:10.659302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.768 [2024-11-26 07:41:10.669790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:26.768 [2024-11-26 07:41:10.669807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.768 [2024-11-26 07:41:10.669814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.768 [2024-11-26 07:41:10.679997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:26.768 [2024-11-26 07:41:10.680014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.768 [2024-11-26 07:41:10.680020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.768 [2024-11-26 07:41:10.690533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:26.768 [2024-11-26 07:41:10.690552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.768 [2024-11-26 07:41:10.690558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.768 [2024-11-26 07:41:10.701588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:26.768 [2024-11-26 07:41:10.701606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.768 [2024-11-26 07:41:10.701612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.768 [2024-11-26 07:41:10.712050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 
00:31:26.768 [2024-11-26 07:41:10.712068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.768 [2024-11-26 07:41:10.712074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.768 [2024-11-26 07:41:10.720324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:26.768 [2024-11-26 07:41:10.720341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.768 [2024-11-26 07:41:10.720347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.768 [2024-11-26 07:41:10.730517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:26.768 [2024-11-26 07:41:10.730534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.768 [2024-11-26 07:41:10.730541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.768 [2024-11-26 07:41:10.740772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:26.768 [2024-11-26 07:41:10.740790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.768 [2024-11-26 07:41:10.740796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.768 3170.00 IOPS, 396.25 MiB/s [2024-11-26T06:41:10.905Z] [2024-11-26 07:41:10.752351] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc75790) 00:31:26.768 [2024-11-26 07:41:10.752369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.768 [2024-11-26 07:41:10.752378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.768 00:31:26.768 Latency(us) 00:31:26.768 [2024-11-26T06:41:10.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:26.768 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:26.768 nvme0n1 : 2.00 3170.19 396.27 0.00 0.00 5042.09 1153.71 13544.11 00:31:26.768 [2024-11-26T06:41:10.905Z] =================================================================================================================== 00:31:26.768 [2024-11-26T06:41:10.905Z] Total : 3170.19 396.27 0.00 0.00 5042.09 1153.71 13544.11 00:31:26.768 { 00:31:26.768 "results": [ 00:31:26.768 { 00:31:26.768 "job": "nvme0n1", 00:31:26.768 "core_mask": "0x2", 00:31:26.768 "workload": "randread", 00:31:26.768 "status": "finished", 00:31:26.768 "queue_depth": 16, 00:31:26.768 "io_size": 131072, 00:31:26.768 "runtime": 2.004926, 00:31:26.768 "iops": 3170.191817553366, 00:31:26.768 "mibps": 396.27397719417075, 00:31:26.768 "io_failed": 0, 00:31:26.768 "io_timeout": 0, 00:31:26.768 "avg_latency_us": 5042.092653660584, 00:31:26.768 "min_latency_us": 1153.7066666666667, 00:31:26.768 "max_latency_us": 13544.106666666667 00:31:26.768 } 00:31:26.768 ], 00:31:26.768 "core_count": 1 00:31:26.768 } 00:31:26.768 07:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:26.768 07:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:26.768 07:41:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:26.768 | .driver_specific 00:31:26.768 | .nvme_error 00:31:26.768 | .status_code 00:31:26.768 | .command_transient_transport_error' 00:31:26.768 07:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:27.029 07:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 206 > 0 )) 00:31:27.029 07:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2301640 00:31:27.029 07:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2301640 ']' 00:31:27.029 07:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2301640 00:31:27.029 07:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:27.029 07:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:27.029 07:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2301640 00:31:27.029 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:27.029 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:27.029 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2301640' 00:31:27.029 killing process with pid 2301640 00:31:27.029 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2301640 00:31:27.029 Received shutdown signal, test time was about 2.000000 seconds 00:31:27.029 00:31:27.029 
Latency(us) 00:31:27.029 [2024-11-26T06:41:11.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:27.029 [2024-11-26T06:41:11.166Z] =================================================================================================================== 00:31:27.029 [2024-11-26T06:41:11.166Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:27.029 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2301640 00:31:27.029 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:31:27.029 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:27.029 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:31:27.029 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:31:27.029 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:31:27.029 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2302327 00:31:27.029 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2302327 /var/tmp/bperf.sock 00:31:27.029 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2302327 ']' 00:31:27.029 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:31:27.029 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:27.029 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:27.029 07:41:11 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:27.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:27.029 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:27.029 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:27.289 [2024-11-26 07:41:11.174200] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:31:27.289 [2024-11-26 07:41:11.174254] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2302327 ] 00:31:27.289 [2024-11-26 07:41:11.261407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.289 [2024-11-26 07:41:11.289830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.860 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:27.860 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:27.860 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:27.860 07:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:28.120 07:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:28.120 07:41:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.120 07:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:28.120 07:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.120 07:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:28.120 07:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:28.694 nvme0n1 00:31:28.694 07:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:28.694 07:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.694 07:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:28.694 07:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.694 07:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:28.694 07:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:28.694 Running I/O for 2 seconds... 
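The setup traced above injects CRC corruption into the accel layer (`accel_error_inject_error -o crc32c -t corrupt -i 256`) and attaches the controller with data digest enabled (`--ddgst`), so every corrupted PDU should fail the host-side digest check and surface as a TRANSIENT TRANSPORT ERROR, which is exactly what the error lines that follow show. A minimal sketch of that digest check, for illustration only: SPDK/NVMe-TCP uses CRC32C, while `zlib.crc32` (plain CRC32) stands in here as an assumption, and `digest_ok` is a hypothetical helper, not an SPDK API.

```python
import zlib

def digest_ok(payload: bytes, received_digest: int) -> bool:
    # The receiver recomputes the CRC over the PDU payload and compares it
    # with the digest carried in the PDU (DDGST field in NVMe/TCP).
    return zlib.crc32(payload) == received_digest

payload = b"nvme tcp pdu payload"
good_digest = zlib.crc32(payload)

# Clean transfer: recomputed digest matches the one on the wire.
assert digest_ok(payload, good_digest)

# Injected corruption (as the accel error injection does): one flipped
# byte makes the recomputed digest mismatch, and the command completes
# with a transient transport error instead of success.
corrupted = bytes([payload[0] ^ 0xFF]) + payload[1:]
assert not digest_ok(corrupted, good_digest)
```

The test then counts these completions via `bdev_get_iostat` and checks the transient-error counter is nonzero, mirroring the `(( 206 > 0 ))` check seen earlier in the log.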
00:31:28.694 [2024-11-26 07:41:12.665418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e88f8 00:31:28.694 [2024-11-26 07:41:12.667152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.694 [2024-11-26 07:41:12.667179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:28.694 [2024-11-26 07:41:12.675921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f5be8 00:31:28.694 [2024-11-26 07:41:12.676958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.694 [2024-11-26 07:41:12.676976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:28.694 [2024-11-26 07:41:12.687908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ed920 00:31:28.694 [2024-11-26 07:41:12.688931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.694 [2024-11-26 07:41:12.688948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:28.694 [2024-11-26 07:41:12.701447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ec840 00:31:28.694 [2024-11-26 07:41:12.703154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.694 [2024-11-26 07:41:12.703171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:28.694 [2024-11-26 07:41:12.711852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166fbcf0 00:31:28.694 [2024-11-26 07:41:12.712933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.694 [2024-11-26 07:41:12.712949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:28.694 [2024-11-26 07:41:12.723027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e6fa8 00:31:28.694 [2024-11-26 07:41:12.724090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.694 [2024-11-26 07:41:12.724107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:28.694 [2024-11-26 07:41:12.735795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e8088 00:31:28.694 [2024-11-26 07:41:12.736807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.694 [2024-11-26 07:41:12.736824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:28.694 [2024-11-26 07:41:12.747770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e9168 00:31:28.694 [2024-11-26 07:41:12.748824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.694 [2024-11-26 07:41:12.748844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:28.694 [2024-11-26 07:41:12.759738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ea248 00:31:28.694 [2024-11-26 07:41:12.760753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.694 [2024-11-26 07:41:12.760768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:28.694 [2024-11-26 07:41:12.771702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166eb328 00:31:28.694 [2024-11-26 07:41:12.772753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.694 [2024-11-26 07:41:12.772769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:28.694 [2024-11-26 07:41:12.783646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f6cc8 00:31:28.694 [2024-11-26 07:41:12.784721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.694 [2024-11-26 07:41:12.784737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:28.694 [2024-11-26 07:41:12.795598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f5be8 00:31:28.694 [2024-11-26 07:41:12.796666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.695 [2024-11-26 07:41:12.796682] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:28.695 [2024-11-26 07:41:12.807549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f4b08 00:31:28.695 [2024-11-26 07:41:12.808617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.695 [2024-11-26 07:41:12.808633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:28.695 [2024-11-26 07:41:12.819493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e01f8 00:31:28.695 [2024-11-26 07:41:12.820555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.695 [2024-11-26 07:41:12.820571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:28.956 [2024-11-26 07:41:12.830645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f1430 00:31:28.956 [2024-11-26 07:41:12.831662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.956 [2024-11-26 07:41:12.831677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:28.956 [2024-11-26 07:41:12.843340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f8a50 00:31:28.956 [2024-11-26 07:41:12.844408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22659 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:28.956 [2024-11-26 07:41:12.844424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:28.956 [2024-11-26 07:41:12.854500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166eb328 00:31:28.956 [2024-11-26 07:41:12.855549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.956 [2024-11-26 07:41:12.855567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:28.956 [2024-11-26 07:41:12.867434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ec408 00:31:28.956 [2024-11-26 07:41:12.868483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.956 [2024-11-26 07:41:12.868499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:28.956 [2024-11-26 07:41:12.878529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f9b30 00:31:28.956 [2024-11-26 07:41:12.879572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.956 [2024-11-26 07:41:12.879587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:28.956 [2024-11-26 07:41:12.891234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f8a50 00:31:28.956 [2024-11-26 07:41:12.892267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 
nsid:1 lba:1756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.956 [2024-11-26 07:41:12.892283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:28.956 [2024-11-26 07:41:12.904719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f7970 00:31:28.956 [2024-11-26 07:41:12.906410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.956 [2024-11-26 07:41:12.906425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:28.956 [2024-11-26 07:41:12.914302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e5ec8 00:31:28.956 [2024-11-26 07:41:12.915350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.957 [2024-11-26 07:41:12.915366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:28.957 [2024-11-26 07:41:12.928571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e12d8 00:31:28.957 [2024-11-26 07:41:12.930260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.957 [2024-11-26 07:41:12.930276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:28.957 [2024-11-26 07:41:12.938941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ec408 00:31:28.957 [2024-11-26 07:41:12.940005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.957 [2024-11-26 07:41:12.940021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:28.957 [2024-11-26 07:41:12.950856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166eb328 00:31:28.957 [2024-11-26 07:41:12.951915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.957 [2024-11-26 07:41:12.951930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:28.957 [2024-11-26 07:41:12.962795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ea248 00:31:28.957 [2024-11-26 07:41:12.963855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.957 [2024-11-26 07:41:12.963873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:28.957 [2024-11-26 07:41:12.976234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e9168 00:31:28.957 [2024-11-26 07:41:12.977910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.957 [2024-11-26 07:41:12.977926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:28.957 [2024-11-26 07:41:12.986580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e1b48 
00:31:28.957 [2024-11-26 07:41:12.987620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.957 [2024-11-26 07:41:12.987635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:28.957 [2024-11-26 07:41:13.000007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e1b48 00:31:28.957 [2024-11-26 07:41:13.001679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.957 [2024-11-26 07:41:13.001695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:28.957 [2024-11-26 07:41:13.009612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f92c0 00:31:28.957 [2024-11-26 07:41:13.010634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.957 [2024-11-26 07:41:13.010650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:28.957 [2024-11-26 07:41:13.022322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f81e0 00:31:28.957 [2024-11-26 07:41:13.023337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.957 [2024-11-26 07:41:13.023353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:28.957 [2024-11-26 07:41:13.034245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x11919c0) with pdu=0x2000166efae0 00:31:28.957 [2024-11-26 07:41:13.035287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.957 [2024-11-26 07:41:13.035303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:28.957 [2024-11-26 07:41:13.047727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e6738 00:31:28.957 [2024-11-26 07:41:13.049371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.957 [2024-11-26 07:41:13.049387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:28.957 [2024-11-26 07:41:13.058090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e4de8 00:31:28.957 [2024-11-26 07:41:13.059104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.957 [2024-11-26 07:41:13.059123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:28.957 [2024-11-26 07:41:13.070004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166fda78 00:31:28.957 [2024-11-26 07:41:13.071027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.957 [2024-11-26 07:41:13.071043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:28.957 [2024-11-26 07:41:13.081115] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f81e0 00:31:28.957 [2024-11-26 07:41:13.082106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:28.957 [2024-11-26 07:41:13.082122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:29.218 [2024-11-26 07:41:13.093791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f7100 00:31:29.218 [2024-11-26 07:41:13.094819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.218 [2024-11-26 07:41:13.094836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:29.218 [2024-11-26 07:41:13.107292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f6020 00:31:29.218 [2024-11-26 07:41:13.108918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.218 [2024-11-26 07:41:13.108934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:29.218 [2024-11-26 07:41:13.117597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f6890 00:31:29.218 [2024-11-26 07:41:13.118629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.218 [2024-11-26 07:41:13.118644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:31:29.218 [2024-11-26 07:41:13.131072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166fe2e8 00:31:29.218 [2024-11-26 07:41:13.132707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.218 [2024-11-26 07:41:13.132722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:29.218 [2024-11-26 07:41:13.141444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e6fa8 00:31:29.218 [2024-11-26 07:41:13.142474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.218 [2024-11-26 07:41:13.142490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:29.218 [2024-11-26 07:41:13.154921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ed920 00:31:29.218 [2024-11-26 07:41:13.156581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.218 [2024-11-26 07:41:13.156597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:29.218 [2024-11-26 07:41:13.165285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f6890 00:31:29.218 [2024-11-26 07:41:13.166317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.218 [2024-11-26 07:41:13.166336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:29.218 [2024-11-26 07:41:13.178751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166fe2e8 00:31:29.218 [2024-11-26 07:41:13.180410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.218 [2024-11-26 07:41:13.180426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:29.218 [2024-11-26 07:41:13.189099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f2510 00:31:29.218 [2024-11-26 07:41:13.190123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.218 [2024-11-26 07:41:13.190138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:29.218 [2024-11-26 07:41:13.201005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f2510 00:31:29.218 [2024-11-26 07:41:13.202018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.218 [2024-11-26 07:41:13.202033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:29.218 [2024-11-26 07:41:13.212886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f2510 00:31:29.218 [2024-11-26 07:41:13.213905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.218 [2024-11-26 07:41:13.213922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:29.218 [2024-11-26 07:41:13.224768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f2510 00:31:29.218 [2024-11-26 07:41:13.225745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.218 [2024-11-26 07:41:13.225761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:29.218 [2024-11-26 07:41:13.236614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ee5c8 00:31:29.218 [2024-11-26 07:41:13.237622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.218 [2024-11-26 07:41:13.237638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:29.218 [2024-11-26 07:41:13.248524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ee5c8 00:31:29.218 [2024-11-26 07:41:13.249492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.218 [2024-11-26 07:41:13.249507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:29.218 [2024-11-26 07:41:13.261964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f2510 00:31:29.218 [2024-11-26 07:41:13.263599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.218 [2024-11-26 07:41:13.263614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:29.218 [2024-11-26 07:41:13.272348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166df988 00:31:29.218 [2024-11-26 07:41:13.273348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.218 [2024-11-26 07:41:13.273364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:29.218 [2024-11-26 07:41:13.285784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f7100 00:31:29.218 [2024-11-26 07:41:13.287424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.219 [2024-11-26 07:41:13.287440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:29.219 [2024-11-26 07:41:13.295378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e6fa8 00:31:29.219 [2024-11-26 07:41:13.296367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.219 [2024-11-26 07:41:13.296382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:29.219 [2024-11-26 07:41:13.308114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166df988 00:31:29.219 [2024-11-26 07:41:13.309103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:29.219 [2024-11-26 07:41:13.309118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:29.219 [2024-11-26 07:41:13.319967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e6fa8 00:31:29.219 [2024-11-26 07:41:13.320962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.219 [2024-11-26 07:41:13.320978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:29.219 [2024-11-26 07:41:13.331109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166edd58 00:31:29.219 [2024-11-26 07:41:13.332119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.219 [2024-11-26 07:41:13.332134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:29.219 [2024-11-26 07:41:13.343764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166edd58 00:31:29.219 [2024-11-26 07:41:13.344756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.219 [2024-11-26 07:41:13.344772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:29.479 [2024-11-26 07:41:13.355675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166edd58 00:31:29.479 [2024-11-26 07:41:13.356627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18757 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.479 [2024-11-26 07:41:13.356643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:29.479 [2024-11-26 07:41:13.367604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f7538 00:31:29.479 [2024-11-26 07:41:13.368610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.479 [2024-11-26 07:41:13.368626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:29.479 [2024-11-26 07:41:13.378803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e7c50 00:31:29.479 [2024-11-26 07:41:13.379773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.479 [2024-11-26 07:41:13.379789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:29.479 [2024-11-26 07:41:13.391446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e7c50 00:31:29.479 [2024-11-26 07:41:13.392427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.479 [2024-11-26 07:41:13.392442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:29.479 [2024-11-26 07:41:13.403358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e7c50 00:31:29.479 [2024-11-26 07:41:13.404334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.479 [2024-11-26 07:41:13.404349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:29.479 [2024-11-26 07:41:13.415237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e7c50 00:31:29.479 [2024-11-26 07:41:13.416209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.479 [2024-11-26 07:41:13.416224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:29.479 [2024-11-26 07:41:13.427139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e7c50 00:31:29.479 [2024-11-26 07:41:13.428109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.479 [2024-11-26 07:41:13.428125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:29.479 [2024-11-26 07:41:13.439025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e7c50 00:31:29.479 [2024-11-26 07:41:13.440000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.479 [2024-11-26 07:41:13.440017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:29.479 [2024-11-26 07:41:13.450127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f8618 
00:31:29.479 [2024-11-26 07:41:13.451088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.479 [2024-11-26 07:41:13.451103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:29.479 [2024-11-26 07:41:13.462844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f7538 00:31:29.479 [2024-11-26 07:41:13.463771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.479 [2024-11-26 07:41:13.463786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:29.479 [2024-11-26 07:41:13.474718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e7c50 00:31:29.479 [2024-11-26 07:41:13.475670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.479 [2024-11-26 07:41:13.475689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:29.479 [2024-11-26 07:41:13.486642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e1f80 00:31:29.479 [2024-11-26 07:41:13.487625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.479 [2024-11-26 07:41:13.487641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:29.479 [2024-11-26 07:41:13.497784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x11919c0) with pdu=0x2000166ef6a8 00:31:29.479 [2024-11-26 07:41:13.498740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.479 [2024-11-26 07:41:13.498755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:29.479 [2024-11-26 07:41:13.510519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ee5c8 00:31:29.479 [2024-11-26 07:41:13.511466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.479 [2024-11-26 07:41:13.511482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:29.479 [2024-11-26 07:41:13.522468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f6cc8 00:31:29.479 [2024-11-26 07:41:13.523458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.479 [2024-11-26 07:41:13.523474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:29.479 [2024-11-26 07:41:13.533633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166fc128 00:31:29.479 [2024-11-26 07:41:13.534582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.479 [2024-11-26 07:41:13.534598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:29.479 [2024-11-26 07:41:13.547926] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e7c50 00:31:29.479 [2024-11-26 07:41:13.549517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.479 [2024-11-26 07:41:13.549533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:29.479 [2024-11-26 07:41:13.558693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166fbcf0 00:31:29.479 [2024-11-26 07:41:13.559816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.479 [2024-11-26 07:41:13.559832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:29.479 [2024-11-26 07:41:13.572431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e99d8 00:31:29.480 [2024-11-26 07:41:13.574165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.480 [2024-11-26 07:41:13.574180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:29.480 [2024-11-26 07:41:13.582802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e1f80 00:31:29.480 [2024-11-26 07:41:13.583914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.480 [2024-11-26 07:41:13.583930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 
dnr:0 00:31:29.480 [2024-11-26 07:41:13.593952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166fd208 00:31:29.480 [2024-11-26 07:41:13.595066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.480 [2024-11-26 07:41:13.595082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:29.480 [2024-11-26 07:41:13.606646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e6fa8 00:31:29.480 [2024-11-26 07:41:13.607793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.480 [2024-11-26 07:41:13.607808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:29.740 [2024-11-26 07:41:13.617804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ea248 00:31:29.740 [2024-11-26 07:41:13.618915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.740 [2024-11-26 07:41:13.618930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:29.740 [2024-11-26 07:41:13.630539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e1f80 00:31:29.740 [2024-11-26 07:41:13.631674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.740 [2024-11-26 07:41:13.631690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:29.740 [2024-11-26 07:41:13.642494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e49b0 00:31:29.740 [2024-11-26 07:41:13.643609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.740 [2024-11-26 07:41:13.643625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:29.740 21190.00 IOPS, 82.77 MiB/s [2024-11-26T06:41:13.877Z] [2024-11-26 07:41:13.654459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e8d30 00:31:29.740 [2024-11-26 07:41:13.655571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.740 [2024-11-26 07:41:13.655587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:29.740 [2024-11-26 07:41:13.666411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f2510 00:31:29.740 [2024-11-26 07:41:13.667553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.740 [2024-11-26 07:41:13.667569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:29.740 [2024-11-26 07:41:13.678370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ddc00 00:31:29.740 [2024-11-26 07:41:13.679494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.740 [2024-11-26 
07:41:13.679512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:29.740 [2024-11-26 07:41:13.691956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166eb328 00:31:29.740 [2024-11-26 07:41:13.693712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.740 [2024-11-26 07:41:13.693729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:29.740 [2024-11-26 07:41:13.702364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166df118 00:31:29.740 [2024-11-26 07:41:13.703481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.740 [2024-11-26 07:41:13.703497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:29.740 [2024-11-26 07:41:13.714285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e95a0 00:31:29.740 [2024-11-26 07:41:13.715401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.741 [2024-11-26 07:41:13.715417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:29.741 [2024-11-26 07:41:13.726222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f1ca0 00:31:29.741 [2024-11-26 07:41:13.727350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22743 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:29.741 [2024-11-26 07:41:13.727366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:29.741 [2024-11-26 07:41:13.737367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ec408 00:31:29.741 [2024-11-26 07:41:13.738471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.741 [2024-11-26 07:41:13.738487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:29.741 [2024-11-26 07:41:13.750068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166de8a8 00:31:29.741 [2024-11-26 07:41:13.751198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.741 [2024-11-26 07:41:13.751213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:29.741 [2024-11-26 07:41:13.761997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e9e10 00:31:29.741 [2024-11-26 07:41:13.763102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.741 [2024-11-26 07:41:13.763117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:29.741 [2024-11-26 07:41:13.773918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ddc00 00:31:29.741 [2024-11-26 07:41:13.775040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:95 nsid:1 lba:15944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.741 [2024-11-26 07:41:13.775056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:29.741 [2024-11-26 07:41:13.785887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e6738 00:31:29.741 [2024-11-26 07:41:13.786978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.741 [2024-11-26 07:41:13.786997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:29.741 [2024-11-26 07:41:13.799358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ec408 00:31:29.741 [2024-11-26 07:41:13.801106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.741 [2024-11-26 07:41:13.801121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:29.741 [2024-11-26 07:41:13.808886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e99d8 00:31:29.741 [2024-11-26 07:41:13.809988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.741 [2024-11-26 07:41:13.810004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:29.741 [2024-11-26 07:41:13.823204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166eaab8 00:31:29.741 [2024-11-26 07:41:13.824945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.741 [2024-11-26 07:41:13.824961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:29.741 [2024-11-26 07:41:13.833552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e5ec8 00:31:29.741 [2024-11-26 07:41:13.834655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.741 [2024-11-26 07:41:13.834670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:29.741 [2024-11-26 07:41:13.845446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e5ec8 00:31:29.741 [2024-11-26 07:41:13.846528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.741 [2024-11-26 07:41:13.846544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:29.741 [2024-11-26 07:41:13.857540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e5ec8 00:31:29.741 [2024-11-26 07:41:13.858650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.741 [2024-11-26 07:41:13.858666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:29.741 [2024-11-26 07:41:13.869452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ea248 00:31:30.003 
[2024-11-26 07:41:13.870524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:13.870540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:13.881410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e9168 00:31:30.003 [2024-11-26 07:41:13.882531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:13.882546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:13.894916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e6b70 00:31:30.003 [2024-11-26 07:41:13.896607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:13.896623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:13.905306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ed0b0 00:31:30.003 [2024-11-26 07:41:13.906411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:13.906427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:13.918772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x11919c0) with pdu=0x2000166e5ec8 00:31:30.003 [2024-11-26 07:41:13.920503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:13.920518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:13.929163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e9168 00:31:30.003 [2024-11-26 07:41:13.930264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:13.930280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:13.941097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e6b70 00:31:30.003 [2024-11-26 07:41:13.942198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:13.942215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:13.953029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ec840 00:31:30.003 [2024-11-26 07:41:13.954161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:13.954177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:13.966461] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ed920 00:31:30.003 [2024-11-26 07:41:13.968184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:13.968199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:13.976851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ff3c8 00:31:30.003 [2024-11-26 07:41:13.977930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:13.977946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:13.987989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e99d8 00:31:30.003 [2024-11-26 07:41:13.989075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:13.989091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:14.000699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e5ec8 00:31:30.003 [2024-11-26 07:41:14.001808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:14.001825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 
dnr:0 00:31:30.003 [2024-11-26 07:41:14.014184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ed920 00:31:30.003 [2024-11-26 07:41:14.015924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:14.015940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:14.023791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e99d8 00:31:30.003 [2024-11-26 07:41:14.024873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:14.024888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:14.036471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e99d8 00:31:30.003 [2024-11-26 07:41:14.037567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:14.037583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:14.048388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e99d8 00:31:30.003 [2024-11-26 07:41:14.049471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:14.049487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:14.060285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e99d8 00:31:30.003 [2024-11-26 07:41:14.061380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:14.061396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:14.072192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e99d8 00:31:30.003 [2024-11-26 07:41:14.073289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:14.073305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:14.084106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e99d8 00:31:30.003 [2024-11-26 07:41:14.085161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:14.085177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:14.096003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e5a90 00:31:30.003 [2024-11-26 07:41:14.097113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:14.097132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:14.107923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166edd58 00:31:30.003 [2024-11-26 07:41:14.109006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:14.109022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:14.119845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166eee38 00:31:30.003 [2024-11-26 07:41:14.120932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.003 [2024-11-26 07:41:14.120948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.003 [2024-11-26 07:41:14.131782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f3a28 00:31:30.265 [2024-11-26 07:41:14.132872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.132888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.143715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e3d08 00:31:30.265 [2024-11-26 07:41:14.144771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.144787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.155633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ed920 00:31:30.265 [2024-11-26 07:41:14.156713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.156729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.167600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e5ec8 00:31:30.265 [2024-11-26 07:41:14.168694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.168710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.179553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e3498 00:31:30.265 [2024-11-26 07:41:14.180629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.180645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.191486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f1ca0 00:31:30.265 [2024-11-26 07:41:14.192523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:30.265 [2024-11-26 07:41:14.192539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.203482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e9168 00:31:30.265 [2024-11-26 07:41:14.204561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.204577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.216998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ea248 00:31:30.265 [2024-11-26 07:41:14.218710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.218726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.226584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e01f8 00:31:30.265 [2024-11-26 07:41:14.227658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.227673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.239280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f1ca0 00:31:30.265 [2024-11-26 07:41:14.240376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14789 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.240391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.251231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e6fa8 00:31:30.265 [2024-11-26 07:41:14.252278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.252293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.263156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ea248 00:31:30.265 [2024-11-26 07:41:14.264228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.264243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.275088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166edd58 00:31:30.265 [2024-11-26 07:41:14.276185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.276200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.288543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e01f8 00:31:30.265 [2024-11-26 07:41:14.290262] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.290277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.298913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166feb58 00:31:30.265 [2024-11-26 07:41:14.300000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.300017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.310825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166dece0 00:31:30.265 [2024-11-26 07:41:14.311912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.311928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.322745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f7da8 00:31:30.265 [2024-11-26 07:41:14.323819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.323834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.336206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166eaab8 00:31:30.265 [2024-11-26 07:41:14.337923] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.337938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.346581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e3060 00:31:30.265 [2024-11-26 07:41:14.347659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.347674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.358474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e01f8 00:31:30.265 [2024-11-26 07:41:14.359546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.359561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.370417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ed920 00:31:30.265 [2024-11-26 07:41:14.371488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.371504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.382328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with 
pdu=0x2000166ec840 00:31:30.265 [2024-11-26 07:41:14.383399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.265 [2024-11-26 07:41:14.383415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:30.265 [2024-11-26 07:41:14.394246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e23b8 00:31:30.527 [2024-11-26 07:41:14.395298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.527 [2024-11-26 07:41:14.395314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.527 [2024-11-26 07:41:14.405446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166fcdd0 00:31:30.527 [2024-11-26 07:41:14.406474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.527 [2024-11-26 07:41:14.406492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:30.527 [2024-11-26 07:41:14.418179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166de038 00:31:30.527 [2024-11-26 07:41:14.419267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.527 [2024-11-26 07:41:14.419283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.527 [2024-11-26 07:41:14.430128] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e6300 00:31:30.527 [2024-11-26 07:41:14.431220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.527 [2024-11-26 07:41:14.431235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.527 [2024-11-26 07:41:14.442073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e1710 00:31:30.527 [2024-11-26 07:41:14.443161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.527 [2024-11-26 07:41:14.443177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.527 [2024-11-26 07:41:14.453205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ec840 00:31:30.527 [2024-11-26 07:41:14.454278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.527 [2024-11-26 07:41:14.454294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:30.527 [2024-11-26 07:41:14.465902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e6fa8 00:31:30.527 [2024-11-26 07:41:14.466992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.527 [2024-11-26 07:41:14.467008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.527 [2024-11-26 
07:41:14.477822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166de470 00:31:30.527 [2024-11-26 07:41:14.478873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.527 [2024-11-26 07:41:14.478889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:30.527 [2024-11-26 07:41:14.489712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e4de8 00:31:30.527 [2024-11-26 07:41:14.490787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.527 [2024-11-26 07:41:14.490803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:30.527 [2024-11-26 07:41:14.503211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e9168 00:31:30.527 [2024-11-26 07:41:14.504923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.527 [2024-11-26 07:41:14.504938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:30.527 [2024-11-26 07:41:14.514585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f5378 00:31:30.527 [2024-11-26 07:41:14.515955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.527 [2024-11-26 07:41:14.515971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0073 p:0 m:0 dnr:0 00:31:30.527 [2024-11-26 07:41:14.526505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f4298 00:31:30.527 [2024-11-26 07:41:14.527876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.527 [2024-11-26 07:41:14.527891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:30.527 [2024-11-26 07:41:14.538450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e0a68 00:31:30.527 [2024-11-26 07:41:14.539781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.527 [2024-11-26 07:41:14.539796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:30.528 [2024-11-26 07:41:14.550334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f3a28 00:31:30.528 [2024-11-26 07:41:14.551690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.528 [2024-11-26 07:41:14.551705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:30.528 [2024-11-26 07:41:14.562271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e3d08 00:31:30.528 [2024-11-26 07:41:14.563649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.528 [2024-11-26 07:41:14.563665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:30.528 [2024-11-26 07:41:14.574282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166fac10 00:31:30.528 [2024-11-26 07:41:14.575645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.528 [2024-11-26 07:41:14.575661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:30.528 [2024-11-26 07:41:14.587757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166fb048 00:31:30.528 [2024-11-26 07:41:14.589769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.528 [2024-11-26 07:41:14.589784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:30.528 [2024-11-26 07:41:14.598817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166e1710 00:31:30.528 [2024-11-26 07:41:14.600429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.528 [2024-11-26 07:41:14.600445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:30.528 [2024-11-26 07:41:14.608372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166f5be8 00:31:30.528 [2024-11-26 07:41:14.609370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.528 [2024-11-26 07:41:14.609385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:30.528 [2024-11-26 07:41:14.621450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166fb8b8 00:31:30.528 [2024-11-26 07:41:14.622763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.528 [2024-11-26 07:41:14.622778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:30.528 [2024-11-26 07:41:14.634443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166ecc78 00:31:30.528 [2024-11-26 07:41:14.636102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.528 [2024-11-26 07:41:14.636117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:30.528 [2024-11-26 07:41:14.643993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11919c0) with pdu=0x2000166fac10 00:31:30.528 [2024-11-26 07:41:14.644986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:30.528 [2024-11-26 07:41:14.645001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:30.528 21299.50 IOPS, 83.20 MiB/s 00:31:30.528 Latency(us) 00:31:30.528 [2024-11-26T06:41:14.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.528 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:30.528 nvme0n1 : 2.01 21306.60 83.23 0.00 0.00 6000.48 2102.61 
17803.95 00:31:30.528 [2024-11-26T06:41:14.665Z] =================================================================================================================== 00:31:30.528 [2024-11-26T06:41:14.665Z] Total : 21306.60 83.23 0.00 0.00 6000.48 2102.61 17803.95 00:31:30.528 { 00:31:30.528 "results": [ 00:31:30.528 { 00:31:30.528 "job": "nvme0n1", 00:31:30.528 "core_mask": "0x2", 00:31:30.528 "workload": "randwrite", 00:31:30.528 "status": "finished", 00:31:30.528 "queue_depth": 128, 00:31:30.528 "io_size": 4096, 00:31:30.528 "runtime": 2.005341, 00:31:30.528 "iops": 21306.600722769843, 00:31:30.528 "mibps": 83.2289090733197, 00:31:30.528 "io_failed": 0, 00:31:30.528 "io_timeout": 0, 00:31:30.528 "avg_latency_us": 6000.475592482505, 00:31:30.528 "min_latency_us": 2102.6133333333332, 00:31:30.528 "max_latency_us": 17803.946666666667 00:31:30.528 } 00:31:30.528 ], 00:31:30.528 "core_count": 1 00:31:30.528 } 00:31:30.788 07:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:30.788 07:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:30.788 07:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:30.788 | .driver_specific 00:31:30.788 | .nvme_error 00:31:30.788 | .status_code 00:31:30.788 | .command_transient_transport_error' 00:31:30.788 07:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:30.788 07:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 167 > 0 )) 00:31:30.788 07:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2302327 00:31:30.788 07:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2302327 
']' 00:31:30.788 07:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2302327 00:31:30.788 07:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:30.788 07:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:30.788 07:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2302327 00:31:31.049 07:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:31.049 07:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:31.049 07:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2302327' 00:31:31.049 killing process with pid 2302327 00:31:31.049 07:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2302327 00:31:31.049 Received shutdown signal, test time was about 2.000000 seconds 00:31:31.049 00:31:31.049 Latency(us) 00:31:31.049 [2024-11-26T06:41:15.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:31.049 [2024-11-26T06:41:15.186Z] =================================================================================================================== 00:31:31.049 [2024-11-26T06:41:15.186Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:31.049 07:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2302327 00:31:31.049 07:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:31:31.049 07:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:31.049 07:41:15 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:31:31.049 07:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:31:31.049 07:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:31:31.049 07:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2303010 00:31:31.049 07:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2303010 /var/tmp/bperf.sock 00:31:31.049 07:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2303010 ']' 00:31:31.049 07:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:31:31.049 07:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:31.049 07:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:31.049 07:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:31.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:31.049 07:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:31.049 07:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:31.049 [2024-11-26 07:41:15.071478] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:31:31.050 [2024-11-26 07:41:15.071532] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2303010 ] 00:31:31.050 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:31.050 Zero copy mechanism will not be used. 00:31:31.050 [2024-11-26 07:41:15.162538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.310 [2024-11-26 07:41:15.191747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.883 07:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:31.883 07:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:31.883 07:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:31.883 07:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:32.144 07:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:32.144 07:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.144 07:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:32.144 07:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.144 07:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:32.144 07:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:32.406 nvme0n1 00:31:32.406 07:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:32.406 07:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.406 07:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:32.406 07:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.406 07:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:32.406 07:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:32.668 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:32.668 Zero copy mechanism will not be used. 00:31:32.668 Running I/O for 2 seconds... 
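The run above attaches the controller with data digests enabled (`--ddgst`) and tells the accel layer to corrupt every 32nd crc32c computation (`accel_error_inject_error -o crc32c -t corrupt -i 32`), so the target's digest check fails and each affected WRITE completes with a TRANSIENT TRANSPORT ERROR (00/22), as the records below show. NVMe/TCP data digests are CRC-32C (Castagnoli); a minimal bit-by-bit sketch of that checksum, for illustration only (SPDK itself uses table-driven or hardware-accelerated implementations):

```python
# Minimal CRC-32C (Castagnoli) sketch, the checksum used for NVMe/TCP
# header and data digests. Bit-by-bit for clarity, not for speed.
CRC32C_POLY = 0x82F63B78  # reflected form of the polynomial 0x1EDC6F41

def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ CRC32C_POLY if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

# Standard check value: CRC-32C of b"123456789" is 0xE3069283.
print(hex(crc32c(b"123456789")))

# Flipping even one payload byte changes the digest, which is what the
# injected corruption simulates: the receiver recomputes the CRC over the
# data PDU, the digests mismatch, and the command fails transiently.
print(crc32c(b"123456789") == crc32c(b"123456788"))
```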
00:31:32.668 [2024-11-26 07:41:16.548042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.548145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.668 [2024-11-26 07:41:16.548173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.555348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.555429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.668 [2024-11-26 07:41:16.555450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.560761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.561011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.668 [2024-11-26 07:41:16.561029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.568308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.568639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.668 [2024-11-26 07:41:16.568656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.573010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.573199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.668 [2024-11-26 07:41:16.573219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.577045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.577243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.668 [2024-11-26 07:41:16.577260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.581365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.581504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.668 [2024-11-26 07:41:16.581521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.586428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.586736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.668 [2024-11-26 07:41:16.586753] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.593773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.593954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.668 [2024-11-26 07:41:16.593971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.597717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.597892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.668 [2024-11-26 07:41:16.597909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.601540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.601843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.668 [2024-11-26 07:41:16.601861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.606304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.606604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:32.668 [2024-11-26 07:41:16.606621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.612719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.613102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.668 [2024-11-26 07:41:16.613119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.621431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.621804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.668 [2024-11-26 07:41:16.621821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.629926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.630222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.668 [2024-11-26 07:41:16.630239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.634003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.634183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.668 [2024-11-26 07:41:16.634200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.638268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.638442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.668 [2024-11-26 07:41:16.638459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.644985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.645216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.668 [2024-11-26 07:41:16.645232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.653293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.653563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.668 [2024-11-26 07:41:16.653580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.662121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.662372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.668 [2024-11-26 07:41:16.662389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.668 [2024-11-26 07:41:16.667829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.668 [2024-11-26 07:41:16.668118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.669 [2024-11-26 07:41:16.668134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.669 [2024-11-26 07:41:16.676571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.669 [2024-11-26 07:41:16.676884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.669 [2024-11-26 07:41:16.676901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.669 [2024-11-26 07:41:16.685280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.669 [2024-11-26 07:41:16.685524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.669 [2024-11-26 07:41:16.685541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.669 [2024-11-26 07:41:16.692648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 
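After each timed run, the harness reads `bdev_get_iostat` over the bperf RPC socket and extracts the transient-error counter with the jq filter shown earlier (`.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`, a field populated because `bdev_nvme_set_options` was called with `--nvme-error-stat`). A Python equivalent of that extraction, using a sample payload with hypothetical values shaped like the iostat output (the count 167 matches the first run's `(( 167 > 0 ))` check):

```python
import json

# Sample bdev_get_iostat payload (hypothetical values, shaped after the
# jq path used by the test script to count injected digest errors).
iostat = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 167
          }
        }
      }
    }
  ]
}
""")

# Equivalent of: jq -r '.bdevs[0] | .driver_specific | .nvme_error
#                       | .status_code | .command_transient_transport_error'
nvme_error = iostat["bdevs"][0]["driver_specific"]["nvme_error"]
errcount = nvme_error["status_code"]["command_transient_transport_error"]

# The test passes when at least one injected digest error was counted.
print(errcount, errcount > 0)
```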
00:31:32.669 [2024-11-26 07:41:16.692817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.669 [2024-11-26 07:41:16.692833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.669 [2024-11-26 07:41:16.698778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.669 [2024-11-26 07:41:16.698856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.669 [2024-11-26 07:41:16.698887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.669 [2024-11-26 07:41:16.706407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.669 [2024-11-26 07:41:16.706697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.669 [2024-11-26 07:41:16.706713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.669 [2024-11-26 07:41:16.714514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.669 [2024-11-26 07:41:16.714769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.669 [2024-11-26 07:41:16.714786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.669 [2024-11-26 07:41:16.723290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.669 [2024-11-26 07:41:16.723464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.669 [2024-11-26 07:41:16.723480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.669 [2024-11-26 07:41:16.729400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.669 [2024-11-26 07:41:16.729614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.669 [2024-11-26 07:41:16.729631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.669 [2024-11-26 07:41:16.738125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.669 [2024-11-26 07:41:16.738300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.669 [2024-11-26 07:41:16.738317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.669 [2024-11-26 07:41:16.743519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.669 [2024-11-26 07:41:16.743675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.669 [2024-11-26 07:41:16.743694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.669 [2024-11-26 07:41:16.749611] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.669 [2024-11-26 07:41:16.749799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.669 [2024-11-26 07:41:16.749813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.669 [2024-11-26 07:41:16.759011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.669 [2024-11-26 07:41:16.759256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.669 [2024-11-26 07:41:16.759272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.669 [2024-11-26 07:41:16.766188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.669 [2024-11-26 07:41:16.766368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.669 [2024-11-26 07:41:16.766384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.669 [2024-11-26 07:41:16.772223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.669 [2024-11-26 07:41:16.772360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.669 [2024-11-26 07:41:16.772375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:31:32.669 [2024-11-26 07:41:16.777436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.669 [2024-11-26 07:41:16.777675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.669 [2024-11-26 07:41:16.777692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.669 [2024-11-26 07:41:16.785124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.669 [2024-11-26 07:41:16.785311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.669 [2024-11-26 07:41:16.785327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.669 [2024-11-26 07:41:16.793886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.669 [2024-11-26 07:41:16.794065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.669 [2024-11-26 07:41:16.794081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.932 [2024-11-26 07:41:16.799132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.932 [2024-11-26 07:41:16.799294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.932 [2024-11-26 07:41:16.799309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.932 [2024-11-26 07:41:16.807689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.932 [2024-11-26 07:41:16.807943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.932 [2024-11-26 07:41:16.807960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.932 [2024-11-26 07:41:16.813937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.932 [2024-11-26 07:41:16.814195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.932 [2024-11-26 07:41:16.814211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.932 [2024-11-26 07:41:16.821164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.932 [2024-11-26 07:41:16.821430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.932 [2024-11-26 07:41:16.821446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.932 [2024-11-26 07:41:16.826313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.826581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.826598] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.833129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.833303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.833319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.839207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.839567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.839583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.843400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.843603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.843619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.849980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.850276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:32.933 [2024-11-26 07:41:16.850292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.854927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.855261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.855278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.862368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.862628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.862644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.868290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.868726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.868743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.873360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.873566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.873582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.880747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.880973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.880990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.888109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.888268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.888285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.892306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.892484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.892500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.898259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.898582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.898598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.904946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.905265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.905282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.910129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.910391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.910410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.915696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.915984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.916001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.921992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 
00:31:32.933 [2024-11-26 07:41:16.922251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.922268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.926821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.926999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.927015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.931122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.931300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.931316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.935579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.935756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.935772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.940376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.940574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.940591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.947831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.948011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.948027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.953538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.953721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.953738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.960406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.960727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.960743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.966802] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.966993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.967010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.970941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.971122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.971138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.933 [2024-11-26 07:41:16.974857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.933 [2024-11-26 07:41:16.975031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.933 [2024-11-26 07:41:16.975047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.934 [2024-11-26 07:41:16.978633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.934 [2024-11-26 07:41:16.978807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.934 [2024-11-26 07:41:16.978824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:31:32.934 [2024-11-26 07:41:16.986783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.934 [2024-11-26 07:41:16.987000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.934 [2024-11-26 07:41:16.987016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.934 [2024-11-26 07:41:16.995114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.934 [2024-11-26 07:41:16.995441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.934 [2024-11-26 07:41:16.995458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.934 [2024-11-26 07:41:17.002156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.934 [2024-11-26 07:41:17.002424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.934 [2024-11-26 07:41:17.002441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.934 [2024-11-26 07:41:17.010062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.934 [2024-11-26 07:41:17.010342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.934 [2024-11-26 07:41:17.010359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.934 [2024-11-26 07:41:17.015981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.934 [2024-11-26 07:41:17.016255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.934 [2024-11-26 07:41:17.016272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.934 [2024-11-26 07:41:17.023001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.934 [2024-11-26 07:41:17.023312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.934 [2024-11-26 07:41:17.023328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.934 [2024-11-26 07:41:17.029136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.934 [2024-11-26 07:41:17.029308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.934 [2024-11-26 07:41:17.029324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.934 [2024-11-26 07:41:17.033427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.934 [2024-11-26 07:41:17.033596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.934 [2024-11-26 07:41:17.033613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.934 [2024-11-26 07:41:17.039054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.934 [2024-11-26 07:41:17.039232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.934 [2024-11-26 07:41:17.039249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:32.934 [2024-11-26 07:41:17.043269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.934 [2024-11-26 07:41:17.043444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.934 [2024-11-26 07:41:17.043461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:32.934 [2024-11-26 07:41:17.047209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.934 [2024-11-26 07:41:17.047387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.934 [2024-11-26 07:41:17.047403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:32.934 [2024-11-26 07:41:17.051312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.934 [2024-11-26 07:41:17.051487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:32.934 [2024-11-26 07:41:17.051504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:32.934 [2024-11-26 07:41:17.058681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:32.934 [2024-11-26 07:41:17.058937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:32.934 [2024-11-26 07:41:17.058956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.196 [2024-11-26 07:41:17.066622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.196 [2024-11-26 07:41:17.066795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.066812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.072802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.072985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.073002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.076677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.076852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.076880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.081163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.081337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.081353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.085299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.085478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.085494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.089239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.089413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.089429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.093224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.093401] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.093417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.097232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.097402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.097418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.100951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.101125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.101140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.106941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.107209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.107225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.111013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.111277] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.111293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.117460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.117749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.117766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.125168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.125444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.125460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.131342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.131518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.131534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.136986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with 
pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.137113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.137128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.142230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.142403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.142419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.148099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.148407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.148424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.155755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.156007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.156023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.162623] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.162939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.162955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.169200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.169463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.169479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.175041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.175217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.175233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.179263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.179441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.179457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 
07:41:17.183338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.183516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.183533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.188188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.188426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.188442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.194419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.194733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.194749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.198708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.198946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.198966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:31:33.197 [2024-11-26 07:41:17.204664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.197 [2024-11-26 07:41:17.204891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.197 [2024-11-26 07:41:17.204907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.198 [2024-11-26 07:41:17.210699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.198 [2024-11-26 07:41:17.210970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.198 [2024-11-26 07:41:17.210987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.198 [2024-11-26 07:41:17.218194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.198 [2024-11-26 07:41:17.218466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.198 [2024-11-26 07:41:17.218483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.198 [2024-11-26 07:41:17.224220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.198 [2024-11-26 07:41:17.224526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.198 [2024-11-26 07:41:17.224543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.198 [2024-11-26 07:41:17.231941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.198 [2024-11-26 07:41:17.232199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.198 [2024-11-26 07:41:17.232215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.198 [2024-11-26 07:41:17.237440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.198 [2024-11-26 07:41:17.237616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.198 [2024-11-26 07:41:17.237632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.198 [2024-11-26 07:41:17.244631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.198 [2024-11-26 07:41:17.244893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.198 [2024-11-26 07:41:17.244909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.198 [2024-11-26 07:41:17.252344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.198 [2024-11-26 07:41:17.252678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.198 [2024-11-26 07:41:17.252694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.198 [2024-11-26 07:41:17.260269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.198 [2024-11-26 07:41:17.260585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.198 [2024-11-26 07:41:17.260601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.198 [2024-11-26 07:41:17.268281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.198 [2024-11-26 07:41:17.268580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.198 [2024-11-26 07:41:17.268597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.198 [2024-11-26 07:41:17.277756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.198 [2024-11-26 07:41:17.278045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.198 [2024-11-26 07:41:17.278062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.198 [2024-11-26 07:41:17.284772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.198 [2024-11-26 07:41:17.285116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:33.198 [2024-11-26 07:41:17.285133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.198 [2024-11-26 07:41:17.292339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.198 [2024-11-26 07:41:17.292610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.198 [2024-11-26 07:41:17.292627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.198 [2024-11-26 07:41:17.300017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.198 [2024-11-26 07:41:17.300187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.198 [2024-11-26 07:41:17.300203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.198 [2024-11-26 07:41:17.307486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.198 [2024-11-26 07:41:17.307730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.198 [2024-11-26 07:41:17.307746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.198 [2024-11-26 07:41:17.317010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.198 [2024-11-26 07:41:17.317298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.198 [2024-11-26 07:41:17.317314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.198 [2024-11-26 07:41:17.324520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.198 [2024-11-26 07:41:17.324738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.198 [2024-11-26 07:41:17.324754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.460 [2024-11-26 07:41:17.331637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.460 [2024-11-26 07:41:17.331938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.460 [2024-11-26 07:41:17.331954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.460 [2024-11-26 07:41:17.340435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.460 [2024-11-26 07:41:17.340677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.460 [2024-11-26 07:41:17.340694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.460 [2024-11-26 07:41:17.349541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.460 [2024-11-26 07:41:17.349723] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.460 [2024-11-26 07:41:17.349739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.460 [2024-11-26 07:41:17.359793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.460 [2024-11-26 07:41:17.360074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.460 [2024-11-26 07:41:17.360090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.370504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.370804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.370821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.380767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.381033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.381050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.390736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.391039] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.391055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.401594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.401882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.401898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.412491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.412706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.412728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.422687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.423116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.423133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.433592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with 
pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.433870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.433886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.442040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.442273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.442289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.450818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.451087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.451104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.456497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.456668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.456684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.464663] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.464960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.464976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.472507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.472783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.472799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.480288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.480579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.480596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.487832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.488101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.488117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 
07:41:17.495991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.496258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.496274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.504595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.504841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.504857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.512769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.513072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.513088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.521847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.522131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.522148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.530143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.530516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.530532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.536295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.536464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.536481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.461 4631.00 IOPS, 578.88 MiB/s [2024-11-26T06:41:17.598Z] [2024-11-26 07:41:17.547294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.547618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.547635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.557908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.558152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.558171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.568727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.568991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.569008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.579022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.579304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.579321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.461 [2024-11-26 07:41:17.589613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.461 [2024-11-26 07:41:17.589812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.461 [2024-11-26 07:41:17.589829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.724 [2024-11-26 07:41:17.600314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.724 [2024-11-26 07:41:17.600585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:33.724 [2024-11-26 07:41:17.600601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.724 [2024-11-26 07:41:17.611094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.724 [2024-11-26 07:41:17.611462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.724 [2024-11-26 07:41:17.611479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.724 [2024-11-26 07:41:17.621899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.724 [2024-11-26 07:41:17.622166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.724 [2024-11-26 07:41:17.622182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.724 [2024-11-26 07:41:17.632452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.724 [2024-11-26 07:41:17.632680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.724 [2024-11-26 07:41:17.632696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.724 [2024-11-26 07:41:17.643076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.724 [2024-11-26 07:41:17.643372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.724 [2024-11-26 07:41:17.643388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.724 [2024-11-26 07:41:17.653490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.724 [2024-11-26 07:41:17.653771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.724 [2024-11-26 07:41:17.653787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.724 [2024-11-26 07:41:17.664525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.724 [2024-11-26 07:41:17.664809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.724 [2024-11-26 07:41:17.664825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.724 [2024-11-26 07:41:17.674754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.724 [2024-11-26 07:41:17.674948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.724 [2024-11-26 07:41:17.674964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.724 [2024-11-26 07:41:17.685349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.724 [2024-11-26 07:41:17.685633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.724 [2024-11-26 07:41:17.685650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.724 [2024-11-26 07:41:17.696031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.724 [2024-11-26 07:41:17.696305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.724 [2024-11-26 07:41:17.696321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.724 [2024-11-26 07:41:17.706495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.724 [2024-11-26 07:41:17.706785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.724 [2024-11-26 07:41:17.706802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.724 [2024-11-26 07:41:17.717096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.724 [2024-11-26 07:41:17.717509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.724 [2024-11-26 07:41:17.717525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.724 [2024-11-26 07:41:17.727667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 
00:31:33.724 [2024-11-26 07:41:17.727956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.724 [2024-11-26 07:41:17.727972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:33.724 [2024-11-26 07:41:17.738691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.724 [2024-11-26 07:41:17.738971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.724 [2024-11-26 07:41:17.738987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:33.724 [2024-11-26 07:41:17.749889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.724 [2024-11-26 07:41:17.750124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.724 [2024-11-26 07:41:17.750140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:33.724 [2024-11-26 07:41:17.759744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.724 [2024-11-26 07:41:17.759946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.724 [2024-11-26 07:41:17.759962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:33.724 [2024-11-26 07:41:17.767937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.724 [2024-11-26 07:41:17.768211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.724 [2024-11-26 07:41:17.768227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:33.724 [2024-11-26 07:41:17.778603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.724 [2024-11-26 07:41:17.778758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.724 [2024-11-26 07:41:17.778773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:33.724 [2024-11-26 07:41:17.787009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.724 [2024-11-26 07:41:17.787264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.724 [2024-11-26 07:41:17.787280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:33.724 [2024-11-26 07:41:17.795300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.725 [2024-11-26 07:41:17.795536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.725 [2024-11-26 07:41:17.795551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:33.725 [2024-11-26 07:41:17.800411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.725 [2024-11-26 07:41:17.800672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.725 [2024-11-26 07:41:17.800688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:33.725 [2024-11-26 07:41:17.806686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.725 [2024-11-26 07:41:17.806887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.725 [2024-11-26 07:41:17.806904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:33.725 [2024-11-26 07:41:17.814680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.725 [2024-11-26 07:41:17.814986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.725 [2024-11-26 07:41:17.815005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:33.725 [2024-11-26 07:41:17.819648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.725 [2024-11-26 07:41:17.819823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.725 [2024-11-26 07:41:17.819840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:33.725 [2024-11-26 07:41:17.828735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.725 [2024-11-26 07:41:17.829068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.725 [2024-11-26 07:41:17.829085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:33.725 [2024-11-26 07:41:17.834158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.725 [2024-11-26 07:41:17.834340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.725 [2024-11-26 07:41:17.834356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:33.725 [2024-11-26 07:41:17.838920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.725 [2024-11-26 07:41:17.839256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.725 [2024-11-26 07:41:17.839272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:33.725 [2024-11-26 07:41:17.844295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.725 [2024-11-26 07:41:17.844467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.725 [2024-11-26 07:41:17.844483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:33.725 [2024-11-26 07:41:17.852080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.725 [2024-11-26 07:41:17.852331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.725 [2024-11-26 07:41:17.852348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:33.988 [2024-11-26 07:41:17.859922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.988 [2024-11-26 07:41:17.860152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.988 [2024-11-26 07:41:17.860169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:33.988 [2024-11-26 07:41:17.865455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.988 [2024-11-26 07:41:17.865758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.988 [2024-11-26 07:41:17.865774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:33.988 [2024-11-26 07:41:17.872250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.988 [2024-11-26 07:41:17.872477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.988 [2024-11-26 07:41:17.872497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:33.988 [2024-11-26 07:41:17.879166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.988 [2024-11-26 07:41:17.879411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.988 [2024-11-26 07:41:17.879427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:33.988 [2024-11-26 07:41:17.886129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.988 [2024-11-26 07:41:17.886418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.988 [2024-11-26 07:41:17.886434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:33.988 [2024-11-26 07:41:17.892712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.988 [2024-11-26 07:41:17.892897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.988 [2024-11-26 07:41:17.892914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:33.988 [2024-11-26 07:41:17.901745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.988 [2024-11-26 07:41:17.902020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.988 [2024-11-26 07:41:17.902036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:33.988 [2024-11-26 07:41:17.912016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.988 [2024-11-26 07:41:17.912282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.988 [2024-11-26 07:41:17.912299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:33.988 [2024-11-26 07:41:17.923163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.988 [2024-11-26 07:41:17.923416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.988 [2024-11-26 07:41:17.923432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:33.988 [2024-11-26 07:41:17.934027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.988 [2024-11-26 07:41:17.934275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.988 [2024-11-26 07:41:17.934291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:33.988 [2024-11-26 07:41:17.944504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.988 [2024-11-26 07:41:17.944651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.988 [2024-11-26 07:41:17.944667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:33.988 [2024-11-26 07:41:17.955058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.988 [2024-11-26 07:41:17.955401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.988 [2024-11-26 07:41:17.955418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:33.988 [2024-11-26 07:41:17.966021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.988 [2024-11-26 07:41:17.966286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.988 [2024-11-26 07:41:17.966303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:33.988 [2024-11-26 07:41:17.977017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.988 [2024-11-26 07:41:17.977256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.988 [2024-11-26 07:41:17.977272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:33.988 [2024-11-26 07:41:17.988153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.988 [2024-11-26 07:41:17.988653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.988 [2024-11-26 07:41:17.988670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:33.988 [2024-11-26 07:41:17.997413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.989 [2024-11-26 07:41:17.997695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.989 [2024-11-26 07:41:17.997711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:33.989 [2024-11-26 07:41:18.007996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.989 [2024-11-26 07:41:18.008266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.989 [2024-11-26 07:41:18.008282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:33.989 [2024-11-26 07:41:18.014762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.989 [2024-11-26 07:41:18.014934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.989 [2024-11-26 07:41:18.014950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:33.989 [2024-11-26 07:41:18.022486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8
00:31:33.989 [2024-11-26 07:41:18.022821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.989 [2024-11-26 07:41:18.022837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.989 [2024-11-26 07:41:18.030250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.989 [2024-11-26 07:41:18.030506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.989 [2024-11-26 07:41:18.030526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.989 [2024-11-26 07:41:18.037587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.989 [2024-11-26 07:41:18.037866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.989 [2024-11-26 07:41:18.037883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.989 [2024-11-26 07:41:18.046225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.989 [2024-11-26 07:41:18.046464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.989 [2024-11-26 07:41:18.046480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.989 [2024-11-26 07:41:18.055506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.989 [2024-11-26 07:41:18.055790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.989 [2024-11-26 07:41:18.055807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.989 [2024-11-26 07:41:18.063933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.989 [2024-11-26 07:41:18.064190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.989 [2024-11-26 07:41:18.064206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.989 [2024-11-26 07:41:18.072101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.989 [2024-11-26 07:41:18.072342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.989 [2024-11-26 07:41:18.072358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.989 [2024-11-26 07:41:18.081304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.989 [2024-11-26 07:41:18.081597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.989 [2024-11-26 07:41:18.081614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.989 [2024-11-26 07:41:18.089204] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.989 [2024-11-26 07:41:18.089381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.989 [2024-11-26 07:41:18.089397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.989 [2024-11-26 07:41:18.096898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.989 [2024-11-26 07:41:18.097138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.989 [2024-11-26 07:41:18.097155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.989 [2024-11-26 07:41:18.105374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.989 [2024-11-26 07:41:18.105827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.989 [2024-11-26 07:41:18.105844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:33.989 [2024-11-26 07:41:18.113414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:33.989 [2024-11-26 07:41:18.113609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.989 [2024-11-26 07:41:18.113626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:31:34.253 [2024-11-26 07:41:18.120593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.120867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.120883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.128779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.129068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.129084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.136777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.137071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.137087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.145277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.145455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.145471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.154261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.154487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.154503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.163879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.164151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.164168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.172850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.173139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.173155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.182540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.182902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.182919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.190699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.190927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.190943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.198916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.199141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.199157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.207047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.207349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.207365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.214895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.215236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:34.253 [2024-11-26 07:41:18.215252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.222723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.222989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.223006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.231381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.231667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.231683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.240453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.240710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.240726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.250117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.250386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.250406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.258295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.258612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.258629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.266814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.267113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.267130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.274222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.274541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.274557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.281153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.281469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.281485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.289300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.289524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.289540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.296387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.296566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.296582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.303632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.303921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.303937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.309300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 
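The repeated `data_crc32_calc_done: *ERROR*: Data digest error` records above come from NVMe/TCP data-digest verification: the receiver recomputes a CRC-32C over each data PDU and compares it against the digest field carried in the PDU, and this test run deliberately corrupts digests so that every WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). As an illustration only (SPDK uses accelerated CRC routines, not this loop), a minimal pure-Python CRC-32C (Castagnoli) sketch:

```python
# Bit-at-a-time CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.
# Illustrates the checksum behind NVMe/TCP data digests; this is NOT
# SPDK's implementation, which uses hardware/ISA-L accelerated code.
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789".
assert crc32c(b"123456789") == 0xE3069283
```

A digest mismatch is transient by definition: the data on the wire was damaged, not the media, so retrying the same WRITE over an intact connection would succeed. That is why the completions above carry status 00/22 rather than a media error.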
00:31:34.253 [2024-11-26 07:41:18.309472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.253 [2024-11-26 07:41:18.309488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:34.253 [2024-11-26 07:41:18.316736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.253 [2024-11-26 07:41:18.316982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.254 [2024-11-26 07:41:18.317001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:34.254 [2024-11-26 07:41:18.324194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.254 [2024-11-26 07:41:18.324445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.254 [2024-11-26 07:41:18.324461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:34.254 [2024-11-26 07:41:18.330188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.254 [2024-11-26 07:41:18.330339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.254 [2024-11-26 07:41:18.330354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:34.254 [2024-11-26 07:41:18.336473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.254 [2024-11-26 07:41:18.336744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.254 [2024-11-26 07:41:18.336760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:34.254 [2024-11-26 07:41:18.342497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.254 [2024-11-26 07:41:18.342669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.254 [2024-11-26 07:41:18.342685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:34.254 [2024-11-26 07:41:18.346900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.254 [2024-11-26 07:41:18.347054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.254 [2024-11-26 07:41:18.347070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:34.254 [2024-11-26 07:41:18.353874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.254 [2024-11-26 07:41:18.354076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.254 [2024-11-26 07:41:18.354091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:34.254 [2024-11-26 07:41:18.359822] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.254 [2024-11-26 07:41:18.360002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.254 [2024-11-26 07:41:18.360018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:34.254 [2024-11-26 07:41:18.369023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.254 [2024-11-26 07:41:18.369219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.254 [2024-11-26 07:41:18.369235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:34.254 [2024-11-26 07:41:18.378846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.254 [2024-11-26 07:41:18.379063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.254 [2024-11-26 07:41:18.379078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:34.515 [2024-11-26 07:41:18.388879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.515 [2024-11-26 07:41:18.389159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.516 [2024-11-26 07:41:18.389175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:31:34.516 [2024-11-26 07:41:18.399142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.516 [2024-11-26 07:41:18.399371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.516 [2024-11-26 07:41:18.399387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:34.516 [2024-11-26 07:41:18.409594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.516 [2024-11-26 07:41:18.409849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.516 [2024-11-26 07:41:18.409869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:34.516 [2024-11-26 07:41:18.419921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.516 [2024-11-26 07:41:18.420167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.516 [2024-11-26 07:41:18.420183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:34.516 [2024-11-26 07:41:18.430512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.516 [2024-11-26 07:41:18.430773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.516 [2024-11-26 07:41:18.430789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:34.516 [2024-11-26 07:41:18.440625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.516 [2024-11-26 07:41:18.440916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.516 [2024-11-26 07:41:18.440932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:34.516 [2024-11-26 07:41:18.450482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.516 [2024-11-26 07:41:18.450909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.516 [2024-11-26 07:41:18.450924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:34.516 [2024-11-26 07:41:18.460669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.516 [2024-11-26 07:41:18.460967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.516 [2024-11-26 07:41:18.460986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:34.516 [2024-11-26 07:41:18.470767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.516 [2024-11-26 07:41:18.471074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.516 [2024-11-26 07:41:18.471090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:34.516 [2024-11-26 07:41:18.481305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.516 [2024-11-26 07:41:18.481655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.516 [2024-11-26 07:41:18.481671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:34.516 [2024-11-26 07:41:18.491358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.516 [2024-11-26 07:41:18.491622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.516 [2024-11-26 07:41:18.491638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:34.516 [2024-11-26 07:41:18.501386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.516 [2024-11-26 07:41:18.501591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.516 [2024-11-26 07:41:18.501607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:34.516 [2024-11-26 07:41:18.511814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.516 [2024-11-26 07:41:18.512093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:34.516 [2024-11-26 07:41:18.512109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:34.516 [2024-11-26 07:41:18.521924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.516 [2024-11-26 07:41:18.522148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.516 [2024-11-26 07:41:18.522164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:34.516 [2024-11-26 07:41:18.532092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.516 [2024-11-26 07:41:18.532355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.516 [2024-11-26 07:41:18.532371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:34.516 [2024-11-26 07:41:18.542474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1191d00) with pdu=0x2000166ff3c8 00:31:34.516 [2024-11-26 07:41:18.542716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.516 [2024-11-26 07:41:18.542731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:34.516 4077.00 IOPS, 509.62 MiB/s 00:31:34.516 Latency(us) 00:31:34.516 [2024-11-26T06:41:18.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:34.516 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:34.516 
nvme0n1 : 2.01 4074.29 509.29 0.00 0.00 3920.16 1747.63 13161.81 00:31:34.516 [2024-11-26T06:41:18.653Z] =================================================================================================================== 00:31:34.516 [2024-11-26T06:41:18.653Z] Total : 4074.29 509.29 0.00 0.00 3920.16 1747.63 13161.81 00:31:34.516 { 00:31:34.516 "results": [ 00:31:34.516 { 00:31:34.516 "job": "nvme0n1", 00:31:34.516 "core_mask": "0x2", 00:31:34.516 "workload": "randwrite", 00:31:34.516 "status": "finished", 00:31:34.516 "queue_depth": 16, 00:31:34.516 "io_size": 131072, 00:31:34.516 "runtime": 2.005995, 00:31:34.516 "iops": 4074.287323747068, 00:31:34.516 "mibps": 509.2859154683835, 00:31:34.516 "io_failed": 0, 00:31:34.516 "io_timeout": 0, 00:31:34.516 "avg_latency_us": 3920.1563261144424, 00:31:34.516 "min_latency_us": 1747.6266666666668, 00:31:34.516 "max_latency_us": 13161.813333333334 00:31:34.516 } 00:31:34.516 ], 00:31:34.516 "core_count": 1 00:31:34.516 } 00:31:34.516 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:34.516 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:34.516 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:34.516 | .driver_specific 00:31:34.516 | .nvme_error 00:31:34.516 | .status_code 00:31:34.516 | .command_transient_transport_error' 00:31:34.516 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:34.778 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 264 > 0 )) 00:31:34.778 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2303010 00:31:34.778 07:41:18 
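The bdevperf summary above reports both IOPS and MiB/s for the same 131072-byte I/O size, so the two figures are redundant and can be cross-checked against each other. A small sketch of that consistency check, using the JSON block printed by the test (field names and values copied from the output above, timestamps stripped):

```python
import json

# Result block as emitted by bdevperf in the log above.
results = json.loads("""
{
  "results": [
    {
      "job": "nvme0n1",
      "queue_depth": 16,
      "io_size": 131072,
      "runtime": 2.005995,
      "iops": 4074.287323747068,
      "mibps": 509.2859154683835,
      "io_failed": 0
    }
  ],
  "core_count": 1
}
""")

job = results["results"][0]
# MiB/s is just IOPS scaled by the I/O size: iops * io_size / 2**20.
derived_mibps = job["iops"] * job["io_size"] / 2**20
assert abs(derived_mibps - job["mibps"]) < 1e-9
```

The pass/fail condition of the test itself is the `(( 264 > 0 ))` check that follows: `bdev_get_iostat` is queried over the bperf RPC socket and the jq filter walks to `.driver_specific.nvme_error.status_code.command_transient_transport_error`, asserting that the injected digest corruptions were actually observed as transient transport errors (264 of them here).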
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2303010 ']' 00:31:34.778 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2303010 00:31:34.778 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:34.778 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:34.778 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2303010 00:31:34.778 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:34.778 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:34.778 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2303010' 00:31:34.778 killing process with pid 2303010 00:31:34.778 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2303010 00:31:34.778 Received shutdown signal, test time was about 2.000000 seconds 00:31:34.778 00:31:34.778 Latency(us) 00:31:34.778 [2024-11-26T06:41:18.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:34.778 [2024-11-26T06:41:18.915Z] =================================================================================================================== 00:31:34.778 [2024-11-26T06:41:18.915Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:34.778 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2303010 00:31:35.040 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2300606 00:31:35.040 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # '[' -z 2300606 ']' 00:31:35.040 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2300606 00:31:35.040 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:35.040 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:35.040 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2300606 00:31:35.040 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:35.040 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:35.040 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2300606' 00:31:35.040 killing process with pid 2300606 00:31:35.040 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2300606 00:31:35.040 07:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2300606 00:31:35.040 00:31:35.040 real 0m16.611s 00:31:35.040 user 0m32.904s 00:31:35.040 sys 0m3.520s 00:31:35.040 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:35.040 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:35.040 ************************************ 00:31:35.040 END TEST nvmf_digest_error 00:31:35.040 ************************************ 00:31:35.040 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:31:35.040 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:31:35.040 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@516 -- # nvmfcleanup 00:31:35.040 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:31:35.040 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:35.040 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:31:35.040 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:35.040 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:35.040 rmmod nvme_tcp 00:31:35.301 rmmod nvme_fabrics 00:31:35.301 rmmod nvme_keyring 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2300606 ']' 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2300606 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2300606 ']' 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2300606 00:31:35.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2300606) - No such process 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2300606 is not found' 00:31:35.301 Process with pid 2300606 is not found 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- 
# iptr 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:35.301 07:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.214 07:41:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:37.214 00:31:37.214 real 0m43.812s 00:31:37.214 user 1m7.700s 00:31:37.214 sys 0m13.586s 00:31:37.214 07:41:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:37.214 07:41:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:37.214 ************************************ 00:31:37.214 END TEST nvmf_digest 00:31:37.214 ************************************ 00:31:37.474 07:41:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:31:37.474 07:41:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:31:37.474 07:41:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:31:37.474 07:41:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:37.474 07:41:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:37.474 07:41:21 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:31:37.474 07:41:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.474 ************************************ 00:31:37.474 START TEST nvmf_bdevperf 00:31:37.474 ************************************ 00:31:37.474 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:37.474 * Looking for test storage... 00:31:37.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:37.474 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@341 -- # ver2_l=1 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:37.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.475 --rc genhtml_branch_coverage=1 00:31:37.475 --rc genhtml_function_coverage=1 00:31:37.475 --rc genhtml_legend=1 00:31:37.475 --rc geninfo_all_blocks=1 00:31:37.475 --rc geninfo_unexecuted_blocks=1 00:31:37.475 00:31:37.475 ' 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:37.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.475 --rc genhtml_branch_coverage=1 00:31:37.475 --rc genhtml_function_coverage=1 00:31:37.475 --rc genhtml_legend=1 00:31:37.475 --rc geninfo_all_blocks=1 00:31:37.475 --rc geninfo_unexecuted_blocks=1 00:31:37.475 00:31:37.475 ' 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:37.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.475 --rc genhtml_branch_coverage=1 00:31:37.475 --rc genhtml_function_coverage=1 00:31:37.475 --rc genhtml_legend=1 00:31:37.475 --rc geninfo_all_blocks=1 00:31:37.475 --rc geninfo_unexecuted_blocks=1 00:31:37.475 00:31:37.475 ' 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:37.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.475 --rc genhtml_branch_coverage=1 00:31:37.475 --rc genhtml_function_coverage=1 00:31:37.475 --rc genhtml_legend=1 00:31:37.475 --rc geninfo_all_blocks=1 00:31:37.475 --rc geninfo_unexecuted_blocks=1 00:31:37.475 00:31:37.475 ' 00:31:37.475 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:31:37.736 07:41:21 
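The xtrace above shows `scripts/common.sh` deciding whether the installed lcov (1.15) predates version 2: `cmp_versions` splits each version string on `.`, `-`, or `:` and compares the components numerically from left to right, returning at the first difference. A hypothetical Python equivalent of that comparison (function name and the pad-with-zeros simplification are mine, not SPDK's):

```python
import re

def version_lt(ver1: str, ver2: str) -> bool:
    """Component-wise numeric comparison, mirroring cmp_versions' IFS=.-: split.
    Missing trailing components are treated as 0 (a simplification)."""
    a = [int(x) for x in re.split(r"[.\-:]", ver1)]
    b = [int(x) for x in re.split(r"[.\-:]", ver2)]
    width = max(len(a), len(b))
    a += [0] * (width - len(a))
    b += [0] * (width - len(b))
    return a < b  # Python list comparison is lexicographic, like the shell loop.

# The case traced above: lcov 1.15 is older than 2, so the 1.x-style
# coverage flags (--rc lcov_branch_coverage=1 ...) are selected.
assert version_lt("1.15", "2")
assert not version_lt("2.39.2", "2.39")
```

In the trace, the first component pair already differs (1 < 2), so the shell function returns 0 (true) without examining the remaining components, and the script exports the lcov-1.x `LCOV_OPTS`.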
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@15 -- # shopt -s extglob 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:37.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:37.736 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:37.737 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:37.737 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:37.737 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:37.737 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:31:37.737 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:37.737 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:37.737 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:37.737 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:37.737 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:37.737 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.737 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.737 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.737 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:37.737 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:37.737 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:31:37.737 07:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:45.876 Found 
0000:31:00.0 (0x8086 - 0x159b) 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:45.876 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:45.876 Found net devices under 0000:31:00.0: cvl_0_0 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:45.876 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:45.877 Found net devices under 0000:31:00.1: cvl_0_1 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:45.877 07:41:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:46.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:46.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:31:46.138 00:31:46.138 --- 10.0.0.2 ping statistics --- 00:31:46.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.138 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:46.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:46.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:31:46.138 00:31:46.138 --- 10.0.0.1 ping statistics --- 00:31:46.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.138 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2308580 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2308580 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2308580 ']' 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:46.138 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:46.138 [2024-11-26 07:41:30.181575] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:31:46.138 [2024-11-26 07:41:30.181644] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.399 [2024-11-26 07:41:30.289345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:46.399 [2024-11-26 07:41:30.341255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:46.399 [2024-11-26 07:41:30.341309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:46.399 [2024-11-26 07:41:30.341317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:46.399 [2024-11-26 07:41:30.341325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:46.399 [2024-11-26 07:41:30.341331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:46.399 [2024-11-26 07:41:30.343196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:46.399 [2024-11-26 07:41:30.343370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:46.399 [2024-11-26 07:41:30.343389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:46.971 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:46.971 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:31:46.971 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:46.971 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:46.971 07:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:46.971 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:46.971 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:46.971 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.971 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:46.971 [2024-11-26 07:41:31.042568] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:46.971 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.971 07:41:31 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:46.971 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.971 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:46.971 Malloc0 00:31:46.971 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.971 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:46.971 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.971 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:46.971 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.971 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:46.971 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.971 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:47.233 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.233 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:47.233 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.233 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:47.233 [2024-11-26 07:41:31.113381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:47.233 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:31:47.233 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:31:47.233 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:31:47.233 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:31:47.233 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:31:47.233 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:47.233 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:47.233 { 00:31:47.233 "params": { 00:31:47.233 "name": "Nvme$subsystem", 00:31:47.233 "trtype": "$TEST_TRANSPORT", 00:31:47.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:47.233 "adrfam": "ipv4", 00:31:47.233 "trsvcid": "$NVMF_PORT", 00:31:47.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:47.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:47.233 "hdgst": ${hdgst:-false}, 00:31:47.233 "ddgst": ${ddgst:-false} 00:31:47.233 }, 00:31:47.233 "method": "bdev_nvme_attach_controller" 00:31:47.233 } 00:31:47.233 EOF 00:31:47.233 )") 00:31:47.233 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:31:47.233 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:31:47.233 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:31:47.233 07:41:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:47.233 "params": { 00:31:47.233 "name": "Nvme1", 00:31:47.233 "trtype": "tcp", 00:31:47.233 "traddr": "10.0.0.2", 00:31:47.233 "adrfam": "ipv4", 00:31:47.233 "trsvcid": "4420", 00:31:47.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:47.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:47.233 "hdgst": false, 00:31:47.233 "ddgst": false 00:31:47.233 }, 00:31:47.233 "method": "bdev_nvme_attach_controller" 00:31:47.233 }' 00:31:47.233 [2024-11-26 07:41:31.171393] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:31:47.233 [2024-11-26 07:41:31.171440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2308753 ] 00:31:47.233 [2024-11-26 07:41:31.247815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.233 [2024-11-26 07:41:31.283975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.495 Running I/O for 1 seconds... 
00:31:48.438 8961.00 IOPS, 35.00 MiB/s 00:31:48.438 Latency(us) 00:31:48.438 [2024-11-26T06:41:32.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:48.438 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:48.438 Verification LBA range: start 0x0 length 0x4000 00:31:48.438 Nvme1n1 : 1.01 9010.32 35.20 0.00 0.00 14141.09 1747.63 13325.65 00:31:48.438 [2024-11-26T06:41:32.575Z] =================================================================================================================== 00:31:48.438 [2024-11-26T06:41:32.575Z] Total : 9010.32 35.20 0.00 0.00 14141.09 1747.63 13325.65 00:31:48.438 07:41:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2309088 00:31:48.438 07:41:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:31:48.438 07:41:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:31:48.438 07:41:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:31:48.438 07:41:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:31:48.438 07:41:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:31:48.438 07:41:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:48.438 07:41:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:48.438 { 00:31:48.438 "params": { 00:31:48.438 "name": "Nvme$subsystem", 00:31:48.438 "trtype": "$TEST_TRANSPORT", 00:31:48.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:48.438 "adrfam": "ipv4", 00:31:48.438 "trsvcid": "$NVMF_PORT", 00:31:48.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:48.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:48.438 "hdgst": ${hdgst:-false}, 00:31:48.438 "ddgst": 
${ddgst:-false} 00:31:48.438 }, 00:31:48.438 "method": "bdev_nvme_attach_controller" 00:31:48.438 } 00:31:48.438 EOF 00:31:48.438 )") 00:31:48.438 07:41:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:31:48.438 07:41:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:31:48.698 07:41:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:31:48.698 07:41:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:48.698 "params": { 00:31:48.698 "name": "Nvme1", 00:31:48.698 "trtype": "tcp", 00:31:48.698 "traddr": "10.0.0.2", 00:31:48.698 "adrfam": "ipv4", 00:31:48.698 "trsvcid": "4420", 00:31:48.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:48.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:48.698 "hdgst": false, 00:31:48.698 "ddgst": false 00:31:48.698 }, 00:31:48.698 "method": "bdev_nvme_attach_controller" 00:31:48.698 }' 00:31:48.698 [2024-11-26 07:41:32.615398] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:31:48.698 [2024-11-26 07:41:32.615453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2309088 ] 00:31:48.698 [2024-11-26 07:41:32.693276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.698 [2024-11-26 07:41:32.729263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.958 Running I/O for 15 seconds... 
00:31:51.282 9429.00 IOPS, 36.83 MiB/s [2024-11-26T06:41:35.682Z]
10397.50 IOPS, 40.62 MiB/s [2024-11-26T06:41:35.682Z]
07:41:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2308580
00:31:51.545 07:41:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:31:51.545 [2024-11-26 07:41:35.569162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.545 [2024-11-26 07:41:35.569204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for every outstanding I/O on qid:1: READ commands at lba 80960 through lba 81792 in steps of 8, plus one WRITE at lba 81960, each completed with ABORTED - SQ DELETION (00/08) after the kill -9 above ...]
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.549 [2024-11-26 07:41:35.571104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.549 [2024-11-26 07:41:35.571121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.549 [2024-11-26 07:41:35.571138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.549 [2024-11-26 07:41:35.571154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.549 [2024-11-26 07:41:35.571170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:51.549 [2024-11-26 07:41:35.571187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.549 [2024-11-26 07:41:35.571204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.549 [2024-11-26 07:41:35.571221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.549 [2024-11-26 07:41:35.571237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.549 [2024-11-26 07:41:35.571256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.549 [2024-11-26 07:41:35.571273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.549 [2024-11-26 07:41:35.571289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.549 [2024-11-26 07:41:35.571306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.549 [2024-11-26 07:41:35.571323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.549 [2024-11-26 07:41:35.571340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.549 [2024-11-26 07:41:35.571357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.549 [2024-11-26 07:41:35.571373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.549 [2024-11-26 07:41:35.571390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.549 [2024-11-26 07:41:35.571406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.549 [2024-11-26 07:41:35.571423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.571432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1372b50 is same with the state(6) to be set 00:31:51.549 [2024-11-26 07:41:35.571441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.549 [2024-11-26 07:41:35.571447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.549 [2024-11-26 07:41:35.571454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81952 len:8 PRP1 0x0 PRP2 0x0 00:31:51.549 [2024-11-26 07:41:35.571463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.549 [2024-11-26 07:41:35.575047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting 
controller 00:31:51.549 [2024-11-26 07:41:35.575099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.550 [2024-11-26 07:41:35.575919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.550 [2024-11-26 07:41:35.575946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.550 [2024-11-26 07:41:35.575955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.550 [2024-11-26 07:41:35.576182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.550 [2024-11-26 07:41:35.576402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.550 [2024-11-26 07:41:35.576411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.550 [2024-11-26 07:41:35.576419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.550 [2024-11-26 07:41:35.576427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.550 [2024-11-26 07:41:35.589177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.550 [2024-11-26 07:41:35.589734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.550 [2024-11-26 07:41:35.589752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.550 [2024-11-26 07:41:35.589760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.550 [2024-11-26 07:41:35.589985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.550 [2024-11-26 07:41:35.590204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.550 [2024-11-26 07:41:35.590212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.550 [2024-11-26 07:41:35.590219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.550 [2024-11-26 07:41:35.590226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.550 [2024-11-26 07:41:35.602975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.550 [2024-11-26 07:41:35.603502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.550 [2024-11-26 07:41:35.603541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.550 [2024-11-26 07:41:35.603552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.550 [2024-11-26 07:41:35.603792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.550 [2024-11-26 07:41:35.604024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.550 [2024-11-26 07:41:35.604034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.550 [2024-11-26 07:41:35.604042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.550 [2024-11-26 07:41:35.604051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.550 [2024-11-26 07:41:35.616779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.550 [2024-11-26 07:41:35.617462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.550 [2024-11-26 07:41:35.617500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.550 [2024-11-26 07:41:35.617511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.550 [2024-11-26 07:41:35.617749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.550 [2024-11-26 07:41:35.617978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.550 [2024-11-26 07:41:35.617988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.550 [2024-11-26 07:41:35.617996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.550 [2024-11-26 07:41:35.618004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.550 [2024-11-26 07:41:35.630756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.550 [2024-11-26 07:41:35.631420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.550 [2024-11-26 07:41:35.631457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.550 [2024-11-26 07:41:35.631468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.550 [2024-11-26 07:41:35.631705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.550 [2024-11-26 07:41:35.631937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.550 [2024-11-26 07:41:35.631947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.550 [2024-11-26 07:41:35.631955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.550 [2024-11-26 07:41:35.631963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.550 [2024-11-26 07:41:35.644695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.550 [2024-11-26 07:41:35.645361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.550 [2024-11-26 07:41:35.645399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.550 [2024-11-26 07:41:35.645410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.550 [2024-11-26 07:41:35.645647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.550 [2024-11-26 07:41:35.645878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.550 [2024-11-26 07:41:35.645888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.550 [2024-11-26 07:41:35.645896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.550 [2024-11-26 07:41:35.645903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.550 [2024-11-26 07:41:35.658643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.550 [2024-11-26 07:41:35.659218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.550 [2024-11-26 07:41:35.659238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.550 [2024-11-26 07:41:35.659246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.550 [2024-11-26 07:41:35.659470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.550 [2024-11-26 07:41:35.659689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.550 [2024-11-26 07:41:35.659697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.550 [2024-11-26 07:41:35.659704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.550 [2024-11-26 07:41:35.659711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.550 [2024-11-26 07:41:35.672441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.837 [2024-11-26 07:41:35.673095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.837 [2024-11-26 07:41:35.673134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.837 [2024-11-26 07:41:35.673145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.837 [2024-11-26 07:41:35.673382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.837 [2024-11-26 07:41:35.673605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.837 [2024-11-26 07:41:35.673614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.837 [2024-11-26 07:41:35.673622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.837 [2024-11-26 07:41:35.673629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.837 [2024-11-26 07:41:35.686370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.837 [2024-11-26 07:41:35.686954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.837 [2024-11-26 07:41:35.686991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.837 [2024-11-26 07:41:35.687004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.837 [2024-11-26 07:41:35.687245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.837 [2024-11-26 07:41:35.687467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.837 [2024-11-26 07:41:35.687476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.837 [2024-11-26 07:41:35.687484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.837 [2024-11-26 07:41:35.687491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.837 [2024-11-26 07:41:35.700234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.837 [2024-11-26 07:41:35.700927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.837 [2024-11-26 07:41:35.700965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.837 [2024-11-26 07:41:35.700977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.837 [2024-11-26 07:41:35.701217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.837 [2024-11-26 07:41:35.701439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.837 [2024-11-26 07:41:35.701453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.837 [2024-11-26 07:41:35.701461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.837 [2024-11-26 07:41:35.701469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.837 [2024-11-26 07:41:35.714089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.837 [2024-11-26 07:41:35.714766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.837 [2024-11-26 07:41:35.714803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.837 [2024-11-26 07:41:35.714816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.838 [2024-11-26 07:41:35.715066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.838 [2024-11-26 07:41:35.715289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.838 [2024-11-26 07:41:35.715298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.838 [2024-11-26 07:41:35.715306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.838 [2024-11-26 07:41:35.715314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.838 [2024-11-26 07:41:35.728062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.838 [2024-11-26 07:41:35.728699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.838 [2024-11-26 07:41:35.728737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.838 [2024-11-26 07:41:35.728748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.838 [2024-11-26 07:41:35.728995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.838 [2024-11-26 07:41:35.729218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.838 [2024-11-26 07:41:35.729229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.838 [2024-11-26 07:41:35.729237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.838 [2024-11-26 07:41:35.729245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.838 [2024-11-26 07:41:35.741995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.838 [2024-11-26 07:41:35.742635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.838 [2024-11-26 07:41:35.742672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.838 [2024-11-26 07:41:35.742683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.838 [2024-11-26 07:41:35.742928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.838 [2024-11-26 07:41:35.743152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.838 [2024-11-26 07:41:35.743161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.838 [2024-11-26 07:41:35.743169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.838 [2024-11-26 07:41:35.743181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.838 [2024-11-26 07:41:35.755924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.838 [2024-11-26 07:41:35.756447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.838 [2024-11-26 07:41:35.756483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.838 [2024-11-26 07:41:35.756493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.838 [2024-11-26 07:41:35.756731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.838 [2024-11-26 07:41:35.756961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.838 [2024-11-26 07:41:35.756971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.838 [2024-11-26 07:41:35.756980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.838 [2024-11-26 07:41:35.756987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.838 [2024-11-26 07:41:35.769715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.838 [2024-11-26 07:41:35.770367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.838 [2024-11-26 07:41:35.770405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.838 [2024-11-26 07:41:35.770416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.838 [2024-11-26 07:41:35.770654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.838 [2024-11-26 07:41:35.770885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.838 [2024-11-26 07:41:35.770894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.838 [2024-11-26 07:41:35.770902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.838 [2024-11-26 07:41:35.770910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.838 [2024-11-26 07:41:35.783635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.838 [2024-11-26 07:41:35.784274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.838 [2024-11-26 07:41:35.784311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.838 [2024-11-26 07:41:35.784322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.838 [2024-11-26 07:41:35.784561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.838 [2024-11-26 07:41:35.784783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.838 [2024-11-26 07:41:35.784791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.838 [2024-11-26 07:41:35.784799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.838 [2024-11-26 07:41:35.784807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.838 [2024-11-26 07:41:35.797540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.838 [2024-11-26 07:41:35.798105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.838 [2024-11-26 07:41:35.798125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.838 [2024-11-26 07:41:35.798133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.838 [2024-11-26 07:41:35.798352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.838 [2024-11-26 07:41:35.798571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.838 [2024-11-26 07:41:35.798579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.838 [2024-11-26 07:41:35.798586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.838 [2024-11-26 07:41:35.798592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.838 [2024-11-26 07:41:35.811518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.838 [2024-11-26 07:41:35.812179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.838 [2024-11-26 07:41:35.812216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.838 [2024-11-26 07:41:35.812227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.838 [2024-11-26 07:41:35.812465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.838 [2024-11-26 07:41:35.812688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.838 [2024-11-26 07:41:35.812696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.838 [2024-11-26 07:41:35.812704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.838 [2024-11-26 07:41:35.812712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.838 [2024-11-26 07:41:35.825453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.838 [2024-11-26 07:41:35.826163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.838 [2024-11-26 07:41:35.826200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.838 [2024-11-26 07:41:35.826211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.838 [2024-11-26 07:41:35.826449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.838 [2024-11-26 07:41:35.826672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.838 [2024-11-26 07:41:35.826681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.838 [2024-11-26 07:41:35.826690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.838 [2024-11-26 07:41:35.826699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.838 [2024-11-26 07:41:35.839273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.838 [2024-11-26 07:41:35.839917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.838 [2024-11-26 07:41:35.839955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.838 [2024-11-26 07:41:35.839968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.838 [2024-11-26 07:41:35.840213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.838 [2024-11-26 07:41:35.840437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.838 [2024-11-26 07:41:35.840445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.838 [2024-11-26 07:41:35.840453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.838 [2024-11-26 07:41:35.840461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.838 [2024-11-26 07:41:35.853205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.838 [2024-11-26 07:41:35.853872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.838 [2024-11-26 07:41:35.853910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.839 [2024-11-26 07:41:35.853922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.839 [2024-11-26 07:41:35.854163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.839 [2024-11-26 07:41:35.854386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.839 [2024-11-26 07:41:35.854394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.839 [2024-11-26 07:41:35.854402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.839 [2024-11-26 07:41:35.854410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.839 [2024-11-26 07:41:35.867143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.839 [2024-11-26 07:41:35.867815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.839 [2024-11-26 07:41:35.867853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.839 [2024-11-26 07:41:35.867871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.839 [2024-11-26 07:41:35.868110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.839 [2024-11-26 07:41:35.868333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.839 [2024-11-26 07:41:35.868342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.839 [2024-11-26 07:41:35.868349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.839 [2024-11-26 07:41:35.868357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.839 [2024-11-26 07:41:35.881097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.839 [2024-11-26 07:41:35.881660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.839 [2024-11-26 07:41:35.881697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.839 [2024-11-26 07:41:35.881708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.839 [2024-11-26 07:41:35.881955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.839 [2024-11-26 07:41:35.882178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.839 [2024-11-26 07:41:35.882192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.839 [2024-11-26 07:41:35.882199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.839 [2024-11-26 07:41:35.882207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.839 [2024-11-26 07:41:35.894939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.839 [2024-11-26 07:41:35.895634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.839 [2024-11-26 07:41:35.895671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.839 [2024-11-26 07:41:35.895681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.839 [2024-11-26 07:41:35.895929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.839 [2024-11-26 07:41:35.896153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.839 [2024-11-26 07:41:35.896162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.839 [2024-11-26 07:41:35.896170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.839 [2024-11-26 07:41:35.896178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.839 [2024-11-26 07:41:35.908901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.839 [2024-11-26 07:41:35.909572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.839 [2024-11-26 07:41:35.909609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.839 [2024-11-26 07:41:35.909620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.839 [2024-11-26 07:41:35.909858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.839 [2024-11-26 07:41:35.910091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.839 [2024-11-26 07:41:35.910100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.839 [2024-11-26 07:41:35.910108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.839 [2024-11-26 07:41:35.910115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.839 [2024-11-26 07:41:35.922844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.839 [2024-11-26 07:41:35.923478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.839 [2024-11-26 07:41:35.923516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.839 [2024-11-26 07:41:35.923526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.839 [2024-11-26 07:41:35.923764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.839 [2024-11-26 07:41:35.923996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.839 [2024-11-26 07:41:35.924006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.839 [2024-11-26 07:41:35.924014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.839 [2024-11-26 07:41:35.924026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.839 [2024-11-26 07:41:35.936746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.839 [2024-11-26 07:41:35.937430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.839 [2024-11-26 07:41:35.937468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.839 [2024-11-26 07:41:35.937478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.839 [2024-11-26 07:41:35.937716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.839 [2024-11-26 07:41:35.937952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.839 [2024-11-26 07:41:35.937961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.839 [2024-11-26 07:41:35.937969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.839 [2024-11-26 07:41:35.937977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.839 [2024-11-26 07:41:35.950709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.839 [2024-11-26 07:41:35.951173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.839 [2024-11-26 07:41:35.951194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.839 [2024-11-26 07:41:35.951202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.839 [2024-11-26 07:41:35.951422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.839 [2024-11-26 07:41:35.951641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.839 [2024-11-26 07:41:35.951649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.839 [2024-11-26 07:41:35.951656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.839 [2024-11-26 07:41:35.951663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:51.839 [2024-11-26 07:41:35.964596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:51.839 [2024-11-26 07:41:35.965177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.839 [2024-11-26 07:41:35.965194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:51.839 [2024-11-26 07:41:35.965202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:51.839 [2024-11-26 07:41:35.965420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:51.839 [2024-11-26 07:41:35.965639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:51.839 [2024-11-26 07:41:35.965647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:51.839 [2024-11-26 07:41:35.965654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:51.839 [2024-11-26 07:41:35.965661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.100 [2024-11-26 07:41:35.978388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.100 [2024-11-26 07:41:35.979049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.100 [2024-11-26 07:41:35.979086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.100 [2024-11-26 07:41:35.979097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.100 [2024-11-26 07:41:35.979336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.100 [2024-11-26 07:41:35.979558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.100 [2024-11-26 07:41:35.979567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.100 [2024-11-26 07:41:35.979574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.100 [2024-11-26 07:41:35.979582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.100 [2024-11-26 07:41:35.992313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.100 [2024-11-26 07:41:35.992907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.100 [2024-11-26 07:41:35.992933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.100 [2024-11-26 07:41:35.992941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.100 [2024-11-26 07:41:35.993165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.100 [2024-11-26 07:41:35.993385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.100 [2024-11-26 07:41:35.993393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.100 [2024-11-26 07:41:35.993400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.100 [2024-11-26 07:41:35.993406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.100 [2024-11-26 07:41:36.006131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.100 [2024-11-26 07:41:36.006785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.100 [2024-11-26 07:41:36.006823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.100 [2024-11-26 07:41:36.006833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.100 [2024-11-26 07:41:36.007081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.100 [2024-11-26 07:41:36.007305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.100 [2024-11-26 07:41:36.007314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.100 [2024-11-26 07:41:36.007321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.100 [2024-11-26 07:41:36.007329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.100 [2024-11-26 07:41:36.020058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.100 [2024-11-26 07:41:36.020693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.100 [2024-11-26 07:41:36.020730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.100 [2024-11-26 07:41:36.020740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.100 [2024-11-26 07:41:36.020993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.100 [2024-11-26 07:41:36.021217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.100 [2024-11-26 07:41:36.021225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.100 [2024-11-26 07:41:36.021233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.100 [2024-11-26 07:41:36.021241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.100 [2024-11-26 07:41:36.033964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.100 [2024-11-26 07:41:36.034621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.100 [2024-11-26 07:41:36.034659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.100 [2024-11-26 07:41:36.034672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.100 [2024-11-26 07:41:36.034922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.100 [2024-11-26 07:41:36.035145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.100 [2024-11-26 07:41:36.035154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.100 [2024-11-26 07:41:36.035161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.100 [2024-11-26 07:41:36.035169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.100 [2024-11-26 07:41:36.047902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.100 [2024-11-26 07:41:36.048548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.101 [2024-11-26 07:41:36.048585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.101 [2024-11-26 07:41:36.048595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.101 [2024-11-26 07:41:36.048833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.101 8834.33 IOPS, 34.51 MiB/s [2024-11-26T06:41:36.238Z] [2024-11-26 07:41:36.050718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.101 [2024-11-26 07:41:36.050727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.101 [2024-11-26 07:41:36.050735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.101 [2024-11-26 07:41:36.050743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.101 [2024-11-26 07:41:36.061810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.101 [2024-11-26 07:41:36.062477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.101 [2024-11-26 07:41:36.062515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.101 [2024-11-26 07:41:36.062526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.101 [2024-11-26 07:41:36.062764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.101 [2024-11-26 07:41:36.062996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.101 [2024-11-26 07:41:36.063010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.101 [2024-11-26 07:41:36.063018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.101 [2024-11-26 07:41:36.063025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.101 [2024-11-26 07:41:36.075746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.101 [2024-11-26 07:41:36.076392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.101 [2024-11-26 07:41:36.076430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.101 [2024-11-26 07:41:36.076441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.101 [2024-11-26 07:41:36.076679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.101 [2024-11-26 07:41:36.076913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.101 [2024-11-26 07:41:36.076923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.101 [2024-11-26 07:41:36.076931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.101 [2024-11-26 07:41:36.076938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.101 [2024-11-26 07:41:36.089682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.101 [2024-11-26 07:41:36.090341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.101 [2024-11-26 07:41:36.090378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.101 [2024-11-26 07:41:36.090389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.101 [2024-11-26 07:41:36.090627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.101 [2024-11-26 07:41:36.090849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.101 [2024-11-26 07:41:36.090858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.101 [2024-11-26 07:41:36.090876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.101 [2024-11-26 07:41:36.090884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.101 [2024-11-26 07:41:36.103615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.101 [2024-11-26 07:41:36.104196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.101 [2024-11-26 07:41:36.104233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.101 [2024-11-26 07:41:36.104246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.101 [2024-11-26 07:41:36.104487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.101 [2024-11-26 07:41:36.104710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.101 [2024-11-26 07:41:36.104719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.101 [2024-11-26 07:41:36.104726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.101 [2024-11-26 07:41:36.104739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.101 [2024-11-26 07:41:36.117469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.101 [2024-11-26 07:41:36.117933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.101 [2024-11-26 07:41:36.117953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.101 [2024-11-26 07:41:36.117961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.101 [2024-11-26 07:41:36.118180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.101 [2024-11-26 07:41:36.118399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.101 [2024-11-26 07:41:36.118407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.101 [2024-11-26 07:41:36.118414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.101 [2024-11-26 07:41:36.118420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.101 [2024-11-26 07:41:36.131358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.101 [2024-11-26 07:41:36.132031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.101 [2024-11-26 07:41:36.132068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.101 [2024-11-26 07:41:36.132079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.101 [2024-11-26 07:41:36.132318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.101 [2024-11-26 07:41:36.132540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.101 [2024-11-26 07:41:36.132549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.101 [2024-11-26 07:41:36.132556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.101 [2024-11-26 07:41:36.132564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.101 [2024-11-26 07:41:36.145302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.101 [2024-11-26 07:41:36.145940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.101 [2024-11-26 07:41:36.145977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.101 [2024-11-26 07:41:36.145988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.101 [2024-11-26 07:41:36.146225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.101 [2024-11-26 07:41:36.146448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.101 [2024-11-26 07:41:36.146457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.101 [2024-11-26 07:41:36.146464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.101 [2024-11-26 07:41:36.146472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.101 [2024-11-26 07:41:36.159217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.101 [2024-11-26 07:41:36.159881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.101 [2024-11-26 07:41:36.159919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.101 [2024-11-26 07:41:36.159930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.101 [2024-11-26 07:41:36.160168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.101 [2024-11-26 07:41:36.160390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.101 [2024-11-26 07:41:36.160398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.101 [2024-11-26 07:41:36.160407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.101 [2024-11-26 07:41:36.160414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.102 [2024-11-26 07:41:36.173148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.102 [2024-11-26 07:41:36.173782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.102 [2024-11-26 07:41:36.173819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.102 [2024-11-26 07:41:36.173830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.102 [2024-11-26 07:41:36.174077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.102 [2024-11-26 07:41:36.174301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.102 [2024-11-26 07:41:36.174309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.102 [2024-11-26 07:41:36.174317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.102 [2024-11-26 07:41:36.174325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.102 [2024-11-26 07:41:36.187051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.102 [2024-11-26 07:41:36.187611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.102 [2024-11-26 07:41:36.187648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.102 [2024-11-26 07:41:36.187660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.102 [2024-11-26 07:41:36.187909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.102 [2024-11-26 07:41:36.188133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.102 [2024-11-26 07:41:36.188142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.102 [2024-11-26 07:41:36.188150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.102 [2024-11-26 07:41:36.188158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.102 [2024-11-26 07:41:36.200884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.102 [2024-11-26 07:41:36.201489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.102 [2024-11-26 07:41:36.201527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.102 [2024-11-26 07:41:36.201538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.102 [2024-11-26 07:41:36.201780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.102 [2024-11-26 07:41:36.202012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.102 [2024-11-26 07:41:36.202022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.102 [2024-11-26 07:41:36.202029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.102 [2024-11-26 07:41:36.202037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.102 [2024-11-26 07:41:36.214758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.102 [2024-11-26 07:41:36.215386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.102 [2024-11-26 07:41:36.215423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.102 [2024-11-26 07:41:36.215433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.102 [2024-11-26 07:41:36.215671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.102 [2024-11-26 07:41:36.215904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.102 [2024-11-26 07:41:36.215913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.102 [2024-11-26 07:41:36.215921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.102 [2024-11-26 07:41:36.215929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.102 [2024-11-26 07:41:36.228664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.363 [2024-11-26 07:41:36.229354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.363 [2024-11-26 07:41:36.229392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.363 [2024-11-26 07:41:36.229403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.363 [2024-11-26 07:41:36.229641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.363 [2024-11-26 07:41:36.229874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.363 [2024-11-26 07:41:36.229883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.363 [2024-11-26 07:41:36.229891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.363 [2024-11-26 07:41:36.229899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.363 [2024-11-26 07:41:36.242628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.363 [2024-11-26 07:41:36.243170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.363 [2024-11-26 07:41:36.243189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.363 [2024-11-26 07:41:36.243197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.363 [2024-11-26 07:41:36.243416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.363 [2024-11-26 07:41:36.243635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.363 [2024-11-26 07:41:36.243648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.363 [2024-11-26 07:41:36.243655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.364 [2024-11-26 07:41:36.243661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.364 [2024-11-26 07:41:36.256416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.364 [2024-11-26 07:41:36.256988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.364 [2024-11-26 07:41:36.257005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.364 [2024-11-26 07:41:36.257013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.364 [2024-11-26 07:41:36.257231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.364 [2024-11-26 07:41:36.257450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.364 [2024-11-26 07:41:36.257457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.364 [2024-11-26 07:41:36.257465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.364 [2024-11-26 07:41:36.257471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.364 [2024-11-26 07:41:36.270397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.364 [2024-11-26 07:41:36.271040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.364 [2024-11-26 07:41:36.271078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.364 [2024-11-26 07:41:36.271090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.364 [2024-11-26 07:41:36.271330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.364 [2024-11-26 07:41:36.271552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.364 [2024-11-26 07:41:36.271561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.364 [2024-11-26 07:41:36.271569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.364 [2024-11-26 07:41:36.271576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.364 [2024-11-26 07:41:36.284307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.364 [2024-11-26 07:41:36.284946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.364 [2024-11-26 07:41:36.284983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.364 [2024-11-26 07:41:36.284994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.364 [2024-11-26 07:41:36.285231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.364 [2024-11-26 07:41:36.285454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.364 [2024-11-26 07:41:36.285462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.364 [2024-11-26 07:41:36.285470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.364 [2024-11-26 07:41:36.285487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.364 [2024-11-26 07:41:36.298217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.364 [2024-11-26 07:41:36.298880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.364 [2024-11-26 07:41:36.298918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.364 [2024-11-26 07:41:36.298929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.364 [2024-11-26 07:41:36.299167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.364 [2024-11-26 07:41:36.299390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.364 [2024-11-26 07:41:36.299398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.364 [2024-11-26 07:41:36.299405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.364 [2024-11-26 07:41:36.299413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.364 [2024-11-26 07:41:36.312143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.364 [2024-11-26 07:41:36.312805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.364 [2024-11-26 07:41:36.312843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.364 [2024-11-26 07:41:36.312854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.364 [2024-11-26 07:41:36.313101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.364 [2024-11-26 07:41:36.313324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.364 [2024-11-26 07:41:36.313333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.364 [2024-11-26 07:41:36.313341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.364 [2024-11-26 07:41:36.313348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.364 [2024-11-26 07:41:36.326080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.364 [2024-11-26 07:41:36.326761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.364 [2024-11-26 07:41:36.326798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.364 [2024-11-26 07:41:36.326810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.364 [2024-11-26 07:41:36.327058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.364 [2024-11-26 07:41:36.327282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.364 [2024-11-26 07:41:36.327292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.364 [2024-11-26 07:41:36.327300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.364 [2024-11-26 07:41:36.327309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.364 [2024-11-26 07:41:36.340053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.364 [2024-11-26 07:41:36.340703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.364 [2024-11-26 07:41:36.340741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.364 [2024-11-26 07:41:36.340752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.364 [2024-11-26 07:41:36.341000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.364 [2024-11-26 07:41:36.341224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.364 [2024-11-26 07:41:36.341233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.364 [2024-11-26 07:41:36.341241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.365 [2024-11-26 07:41:36.341249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.365 [2024-11-26 07:41:36.353998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.365 [2024-11-26 07:41:36.354638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.365 [2024-11-26 07:41:36.354675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.365 [2024-11-26 07:41:36.354685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.365 [2024-11-26 07:41:36.354932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.365 [2024-11-26 07:41:36.355155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.365 [2024-11-26 07:41:36.355164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.365 [2024-11-26 07:41:36.355172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.365 [2024-11-26 07:41:36.355179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.365 [2024-11-26 07:41:36.367909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.365 [2024-11-26 07:41:36.368571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.365 [2024-11-26 07:41:36.368608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.365 [2024-11-26 07:41:36.368619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.365 [2024-11-26 07:41:36.368857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.365 [2024-11-26 07:41:36.369090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.365 [2024-11-26 07:41:36.369098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.365 [2024-11-26 07:41:36.369106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.365 [2024-11-26 07:41:36.369114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.365 [2024-11-26 07:41:36.381833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.365 [2024-11-26 07:41:36.382427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.365 [2024-11-26 07:41:36.382447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.365 [2024-11-26 07:41:36.382454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.365 [2024-11-26 07:41:36.382678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.365 [2024-11-26 07:41:36.382905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.365 [2024-11-26 07:41:36.382914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.365 [2024-11-26 07:41:36.382921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.365 [2024-11-26 07:41:36.382928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.365 [2024-11-26 07:41:36.395636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.365 [2024-11-26 07:41:36.396178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.365 [2024-11-26 07:41:36.396196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.365 [2024-11-26 07:41:36.396203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.365 [2024-11-26 07:41:36.396422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.365 [2024-11-26 07:41:36.396640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.365 [2024-11-26 07:41:36.396648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.365 [2024-11-26 07:41:36.396655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.365 [2024-11-26 07:41:36.396661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.365 [2024-11-26 07:41:36.409582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.365 [2024-11-26 07:41:36.410056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.365 [2024-11-26 07:41:36.410073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.365 [2024-11-26 07:41:36.410080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.365 [2024-11-26 07:41:36.410298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.365 [2024-11-26 07:41:36.410516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.365 [2024-11-26 07:41:36.410526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.365 [2024-11-26 07:41:36.410533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.365 [2024-11-26 07:41:36.410539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.365 [2024-11-26 07:41:36.423465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.365 [2024-11-26 07:41:36.424118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.365 [2024-11-26 07:41:36.424155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.365 [2024-11-26 07:41:36.424166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.365 [2024-11-26 07:41:36.424403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.365 [2024-11-26 07:41:36.424625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.365 [2024-11-26 07:41:36.424638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.365 [2024-11-26 07:41:36.424647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.365 [2024-11-26 07:41:36.424654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.365 [2024-11-26 07:41:36.437382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.365 [2024-11-26 07:41:36.438092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.365 [2024-11-26 07:41:36.438129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.365 [2024-11-26 07:41:36.438140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.365 [2024-11-26 07:41:36.438378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.365 [2024-11-26 07:41:36.438600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.365 [2024-11-26 07:41:36.438609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.365 [2024-11-26 07:41:36.438617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.366 [2024-11-26 07:41:36.438624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.366 [2024-11-26 07:41:36.451360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.366 [2024-11-26 07:41:36.452037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.366 [2024-11-26 07:41:36.452074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.366 [2024-11-26 07:41:36.452085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.366 [2024-11-26 07:41:36.452323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.366 [2024-11-26 07:41:36.452556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.366 [2024-11-26 07:41:36.452566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.366 [2024-11-26 07:41:36.452574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.366 [2024-11-26 07:41:36.452582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.366 [2024-11-26 07:41:36.465313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.366 [2024-11-26 07:41:36.465859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.366 [2024-11-26 07:41:36.465884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.366 [2024-11-26 07:41:36.465892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.366 [2024-11-26 07:41:36.466112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.366 [2024-11-26 07:41:36.466331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.366 [2024-11-26 07:41:36.466339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.366 [2024-11-26 07:41:36.466346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.366 [2024-11-26 07:41:36.466357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.366 [2024-11-26 07:41:36.479289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:52.366 [2024-11-26 07:41:36.479699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.366 [2024-11-26 07:41:36.479717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:52.366 [2024-11-26 07:41:36.479725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:52.366 [2024-11-26 07:41:36.479956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:52.366 [2024-11-26 07:41:36.480177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:52.366 [2024-11-26 07:41:36.480185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:52.366 [2024-11-26 07:41:36.480193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:52.366 [2024-11-26 07:41:36.480200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:52.628 [2024-11-26 07:41:36.493139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.628 [2024-11-26 07:41:36.493798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.628 [2024-11-26 07:41:36.493835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.628 [2024-11-26 07:41:36.493846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.628 [2024-11-26 07:41:36.494094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.628 [2024-11-26 07:41:36.494317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.628 [2024-11-26 07:41:36.494326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.628 [2024-11-26 07:41:36.494334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.628 [2024-11-26 07:41:36.494342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.628 [2024-11-26 07:41:36.507092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.628 [2024-11-26 07:41:36.507638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.628 [2024-11-26 07:41:36.507658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.628 [2024-11-26 07:41:36.507666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.628 [2024-11-26 07:41:36.507892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.628 [2024-11-26 07:41:36.508112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.628 [2024-11-26 07:41:36.508121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.628 [2024-11-26 07:41:36.508128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.628 [2024-11-26 07:41:36.508135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.628 [2024-11-26 07:41:36.521091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.628 [2024-11-26 07:41:36.521628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.628 [2024-11-26 07:41:36.521645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.628 [2024-11-26 07:41:36.521652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.628 [2024-11-26 07:41:36.521877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.628 [2024-11-26 07:41:36.522097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.628 [2024-11-26 07:41:36.522105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.628 [2024-11-26 07:41:36.522113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.628 [2024-11-26 07:41:36.522120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.628 [2024-11-26 07:41:36.535061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.628 [2024-11-26 07:41:36.535680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.628 [2024-11-26 07:41:36.535718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.628 [2024-11-26 07:41:36.535729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.628 [2024-11-26 07:41:36.535978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.628 [2024-11-26 07:41:36.536202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.628 [2024-11-26 07:41:36.536211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.628 [2024-11-26 07:41:36.536219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.628 [2024-11-26 07:41:36.536227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.628 [2024-11-26 07:41:36.548951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.628 [2024-11-26 07:41:36.549608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.628 [2024-11-26 07:41:36.549645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.628 [2024-11-26 07:41:36.549656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.628 [2024-11-26 07:41:36.549901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.628 [2024-11-26 07:41:36.550124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.628 [2024-11-26 07:41:36.550133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.628 [2024-11-26 07:41:36.550141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.628 [2024-11-26 07:41:36.550149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.628 [2024-11-26 07:41:36.562903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.628 [2024-11-26 07:41:36.563573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.628 [2024-11-26 07:41:36.563611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.628 [2024-11-26 07:41:36.563622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.628 [2024-11-26 07:41:36.563874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.628 [2024-11-26 07:41:36.564098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.629 [2024-11-26 07:41:36.564107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.629 [2024-11-26 07:41:36.564114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.629 [2024-11-26 07:41:36.564123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.629 [2024-11-26 07:41:36.576856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.629 [2024-11-26 07:41:36.577545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.629 [2024-11-26 07:41:36.577583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.629 [2024-11-26 07:41:36.577594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.629 [2024-11-26 07:41:36.577833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.629 [2024-11-26 07:41:36.578069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.629 [2024-11-26 07:41:36.578078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.629 [2024-11-26 07:41:36.578086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.629 [2024-11-26 07:41:36.578094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.629 [2024-11-26 07:41:36.590839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.629 [2024-11-26 07:41:36.591384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.629 [2024-11-26 07:41:36.591405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.629 [2024-11-26 07:41:36.591414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.629 [2024-11-26 07:41:36.591632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.629 [2024-11-26 07:41:36.591851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.629 [2024-11-26 07:41:36.591868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.629 [2024-11-26 07:41:36.591875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.629 [2024-11-26 07:41:36.591882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.629 [2024-11-26 07:41:36.604705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.629 [2024-11-26 07:41:36.605270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.629 [2024-11-26 07:41:36.605289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.629 [2024-11-26 07:41:36.605296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.629 [2024-11-26 07:41:36.605515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.629 [2024-11-26 07:41:36.605733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.629 [2024-11-26 07:41:36.605746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.629 [2024-11-26 07:41:36.605753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.629 [2024-11-26 07:41:36.605760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.629 [2024-11-26 07:41:36.618482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.629 [2024-11-26 07:41:36.619020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.629 [2024-11-26 07:41:36.619038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.629 [2024-11-26 07:41:36.619045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.629 [2024-11-26 07:41:36.619264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.629 [2024-11-26 07:41:36.619482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.629 [2024-11-26 07:41:36.619490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.629 [2024-11-26 07:41:36.619498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.629 [2024-11-26 07:41:36.619504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.629 [2024-11-26 07:41:36.632432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.629 [2024-11-26 07:41:36.633101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.629 [2024-11-26 07:41:36.633138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.629 [2024-11-26 07:41:36.633149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.629 [2024-11-26 07:41:36.633387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.629 [2024-11-26 07:41:36.633610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.629 [2024-11-26 07:41:36.633618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.629 [2024-11-26 07:41:36.633626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.629 [2024-11-26 07:41:36.633634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.629 [2024-11-26 07:41:36.646374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.629 [2024-11-26 07:41:36.647064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.629 [2024-11-26 07:41:36.647102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.629 [2024-11-26 07:41:36.647112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.629 [2024-11-26 07:41:36.647351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.629 [2024-11-26 07:41:36.647573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.629 [2024-11-26 07:41:36.647582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.629 [2024-11-26 07:41:36.647589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.629 [2024-11-26 07:41:36.647601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.629 [2024-11-26 07:41:36.660345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.629 [2024-11-26 07:41:36.660972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.629 [2024-11-26 07:41:36.661010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.629 [2024-11-26 07:41:36.661020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.629 [2024-11-26 07:41:36.661259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.629 [2024-11-26 07:41:36.661481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.629 [2024-11-26 07:41:36.661490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.629 [2024-11-26 07:41:36.661498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.629 [2024-11-26 07:41:36.661505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.629 [2024-11-26 07:41:36.674267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.629 [2024-11-26 07:41:36.674848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.629 [2024-11-26 07:41:36.674892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.629 [2024-11-26 07:41:36.674904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.629 [2024-11-26 07:41:36.675141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.629 [2024-11-26 07:41:36.675364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.629 [2024-11-26 07:41:36.675373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.629 [2024-11-26 07:41:36.675380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.629 [2024-11-26 07:41:36.675388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.629 [2024-11-26 07:41:36.688111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.629 [2024-11-26 07:41:36.688786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.629 [2024-11-26 07:41:36.688822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.629 [2024-11-26 07:41:36.688834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.629 [2024-11-26 07:41:36.689081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.629 [2024-11-26 07:41:36.689304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.629 [2024-11-26 07:41:36.689313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.629 [2024-11-26 07:41:36.689321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.629 [2024-11-26 07:41:36.689329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.629 [2024-11-26 07:41:36.702052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.629 [2024-11-26 07:41:36.702713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.629 [2024-11-26 07:41:36.702750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.629 [2024-11-26 07:41:36.702761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.629 [2024-11-26 07:41:36.703008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.630 [2024-11-26 07:41:36.703232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.630 [2024-11-26 07:41:36.703240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.630 [2024-11-26 07:41:36.703248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.630 [2024-11-26 07:41:36.703256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.630 [2024-11-26 07:41:36.716004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.630 [2024-11-26 07:41:36.716590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.630 [2024-11-26 07:41:36.716610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.630 [2024-11-26 07:41:36.716617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.630 [2024-11-26 07:41:36.716837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.630 [2024-11-26 07:41:36.717062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.630 [2024-11-26 07:41:36.717071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.630 [2024-11-26 07:41:36.717079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.630 [2024-11-26 07:41:36.717086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.630 [2024-11-26 07:41:36.729838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.630 [2024-11-26 07:41:36.730454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.630 [2024-11-26 07:41:36.730493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.630 [2024-11-26 07:41:36.730504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.630 [2024-11-26 07:41:36.730741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.630 [2024-11-26 07:41:36.730973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.630 [2024-11-26 07:41:36.730983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.630 [2024-11-26 07:41:36.730991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.630 [2024-11-26 07:41:36.730998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.630 [2024-11-26 07:41:36.743739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.630 [2024-11-26 07:41:36.744288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.630 [2024-11-26 07:41:36.744308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.630 [2024-11-26 07:41:36.744316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.630 [2024-11-26 07:41:36.744541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.630 [2024-11-26 07:41:36.744759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.630 [2024-11-26 07:41:36.744767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.630 [2024-11-26 07:41:36.744775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.630 [2024-11-26 07:41:36.744781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.891 [2024-11-26 07:41:36.757533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.891 [2024-11-26 07:41:36.758160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.891 [2024-11-26 07:41:36.758200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.891 [2024-11-26 07:41:36.758211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.891 [2024-11-26 07:41:36.758449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.891 [2024-11-26 07:41:36.758672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.891 [2024-11-26 07:41:36.758682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.891 [2024-11-26 07:41:36.758689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.891 [2024-11-26 07:41:36.758697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.891 [2024-11-26 07:41:36.771458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.891 [2024-11-26 07:41:36.772161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.891 [2024-11-26 07:41:36.772198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.891 [2024-11-26 07:41:36.772209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.891 [2024-11-26 07:41:36.772447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.891 [2024-11-26 07:41:36.772670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.891 [2024-11-26 07:41:36.772678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.891 [2024-11-26 07:41:36.772686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.891 [2024-11-26 07:41:36.772694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.891 [2024-11-26 07:41:36.785426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.891 [2024-11-26 07:41:36.785984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.891 [2024-11-26 07:41:36.786022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.891 [2024-11-26 07:41:36.786035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.891 [2024-11-26 07:41:36.786275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.891 [2024-11-26 07:41:36.786498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.891 [2024-11-26 07:41:36.786512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.891 [2024-11-26 07:41:36.786520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.891 [2024-11-26 07:41:36.786528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.891 [2024-11-26 07:41:36.799270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.891 [2024-11-26 07:41:36.799833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.891 [2024-11-26 07:41:36.799853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.891 [2024-11-26 07:41:36.799867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.891 [2024-11-26 07:41:36.800087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.891 [2024-11-26 07:41:36.800305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.891 [2024-11-26 07:41:36.800315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.891 [2024-11-26 07:41:36.800323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.891 [2024-11-26 07:41:36.800330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.891 [2024-11-26 07:41:36.813066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.891 [2024-11-26 07:41:36.813730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.891 [2024-11-26 07:41:36.813767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.891 [2024-11-26 07:41:36.813778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.891 [2024-11-26 07:41:36.814024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.891 [2024-11-26 07:41:36.814248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.891 [2024-11-26 07:41:36.814256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.891 [2024-11-26 07:41:36.814265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.891 [2024-11-26 07:41:36.814273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.891 [2024-11-26 07:41:36.827039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.891 [2024-11-26 07:41:36.827648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.891 [2024-11-26 07:41:36.827686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.891 [2024-11-26 07:41:36.827697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.891 [2024-11-26 07:41:36.827946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.891 [2024-11-26 07:41:36.828170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.891 [2024-11-26 07:41:36.828179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.891 [2024-11-26 07:41:36.828188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.891 [2024-11-26 07:41:36.828201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.892 [2024-11-26 07:41:36.840958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.892 [2024-11-26 07:41:36.841546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.892 [2024-11-26 07:41:36.841567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.892 [2024-11-26 07:41:36.841575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.892 [2024-11-26 07:41:36.841795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.892 [2024-11-26 07:41:36.842022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.892 [2024-11-26 07:41:36.842031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.892 [2024-11-26 07:41:36.842038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.892 [2024-11-26 07:41:36.842045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.892 [2024-11-26 07:41:36.854794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.892 [2024-11-26 07:41:36.855452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.892 [2024-11-26 07:41:36.855489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.892 [2024-11-26 07:41:36.855500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.892 [2024-11-26 07:41:36.855738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.892 [2024-11-26 07:41:36.855971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.892 [2024-11-26 07:41:36.855981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.892 [2024-11-26 07:41:36.855988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.892 [2024-11-26 07:41:36.855996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.892 [2024-11-26 07:41:36.868602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.892 [2024-11-26 07:41:36.869250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.892 [2024-11-26 07:41:36.869289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.892 [2024-11-26 07:41:36.869301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.892 [2024-11-26 07:41:36.869542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.892 [2024-11-26 07:41:36.869765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.892 [2024-11-26 07:41:36.869774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.892 [2024-11-26 07:41:36.869782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.892 [2024-11-26 07:41:36.869790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.892 [2024-11-26 07:41:36.882543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.892 [2024-11-26 07:41:36.883190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.892 [2024-11-26 07:41:36.883227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.892 [2024-11-26 07:41:36.883238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.892 [2024-11-26 07:41:36.883476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.892 [2024-11-26 07:41:36.883698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.892 [2024-11-26 07:41:36.883708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.892 [2024-11-26 07:41:36.883716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.892 [2024-11-26 07:41:36.883723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.892 [2024-11-26 07:41:36.896484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.892 [2024-11-26 07:41:36.897155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.892 [2024-11-26 07:41:36.897194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.892 [2024-11-26 07:41:36.897205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.892 [2024-11-26 07:41:36.897442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.892 [2024-11-26 07:41:36.897664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.892 [2024-11-26 07:41:36.897673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.892 [2024-11-26 07:41:36.897681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.892 [2024-11-26 07:41:36.897688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.892 [2024-11-26 07:41:36.910425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.892 [2024-11-26 07:41:36.911073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.892 [2024-11-26 07:41:36.911111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.892 [2024-11-26 07:41:36.911122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.892 [2024-11-26 07:41:36.911360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.892 [2024-11-26 07:41:36.911582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.892 [2024-11-26 07:41:36.911591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.892 [2024-11-26 07:41:36.911599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.892 [2024-11-26 07:41:36.911607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.892 [2024-11-26 07:41:36.924414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.892 [2024-11-26 07:41:36.925092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.892 [2024-11-26 07:41:36.925130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.892 [2024-11-26 07:41:36.925142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.892 [2024-11-26 07:41:36.925386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.892 [2024-11-26 07:41:36.925608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.892 [2024-11-26 07:41:36.925617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.892 [2024-11-26 07:41:36.925625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.892 [2024-11-26 07:41:36.925633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.892 [2024-11-26 07:41:36.938370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.892 [2024-11-26 07:41:36.938967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.892 [2024-11-26 07:41:36.939004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.892 [2024-11-26 07:41:36.939016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.892 [2024-11-26 07:41:36.939257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.892 [2024-11-26 07:41:36.939480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.892 [2024-11-26 07:41:36.939489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.892 [2024-11-26 07:41:36.939497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.892 [2024-11-26 07:41:36.939504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.892 [2024-11-26 07:41:36.952240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.893 [2024-11-26 07:41:36.952875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.893 [2024-11-26 07:41:36.952896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.893 [2024-11-26 07:41:36.952904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.893 [2024-11-26 07:41:36.953123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.893 [2024-11-26 07:41:36.953341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.893 [2024-11-26 07:41:36.953349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.893 [2024-11-26 07:41:36.953356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.893 [2024-11-26 07:41:36.953363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.893 [2024-11-26 07:41:36.966113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.893 [2024-11-26 07:41:36.966669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.893 [2024-11-26 07:41:36.966686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.893 [2024-11-26 07:41:36.966693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.893 [2024-11-26 07:41:36.966918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.893 [2024-11-26 07:41:36.967136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.893 [2024-11-26 07:41:36.967153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.893 [2024-11-26 07:41:36.967160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.893 [2024-11-26 07:41:36.967167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.893 [2024-11-26 07:41:36.979906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.893 [2024-11-26 07:41:36.980561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.893 [2024-11-26 07:41:36.980599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.893 [2024-11-26 07:41:36.980610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.893 [2024-11-26 07:41:36.980847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.893 [2024-11-26 07:41:36.981081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.893 [2024-11-26 07:41:36.981091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.893 [2024-11-26 07:41:36.981098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.893 [2024-11-26 07:41:36.981106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.893 [2024-11-26 07:41:36.993849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.893 [2024-11-26 07:41:36.994511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.893 [2024-11-26 07:41:36.994549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.893 [2024-11-26 07:41:36.994560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.893 [2024-11-26 07:41:36.994799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.893 [2024-11-26 07:41:36.995031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.893 [2024-11-26 07:41:36.995041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.893 [2024-11-26 07:41:36.995048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.893 [2024-11-26 07:41:36.995056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:52.893 [2024-11-26 07:41:37.007797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:52.893 [2024-11-26 07:41:37.008435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.893 [2024-11-26 07:41:37.008473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:52.893 [2024-11-26 07:41:37.008484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:52.893 [2024-11-26 07:41:37.008722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:52.893 [2024-11-26 07:41:37.008953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:52.893 [2024-11-26 07:41:37.008963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:52.893 [2024-11-26 07:41:37.008971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:52.893 [2024-11-26 07:41:37.008984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.154 [2024-11-26 07:41:37.021727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.154 [2024-11-26 07:41:37.022397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.154 [2024-11-26 07:41:37.022435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.154 [2024-11-26 07:41:37.022446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.154 [2024-11-26 07:41:37.022684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.154 [2024-11-26 07:41:37.022914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.154 [2024-11-26 07:41:37.022924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.154 [2024-11-26 07:41:37.022932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.154 [2024-11-26 07:41:37.022939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.154 [2024-11-26 07:41:37.035671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.154 [2024-11-26 07:41:37.036322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.154 [2024-11-26 07:41:37.036360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.154 [2024-11-26 07:41:37.036373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.154 [2024-11-26 07:41:37.036614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.154 [2024-11-26 07:41:37.036837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.154 [2024-11-26 07:41:37.036845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.154 [2024-11-26 07:41:37.036853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.154 [2024-11-26 07:41:37.036861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.154 [2024-11-26 07:41:37.049604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.154 [2024-11-26 07:41:37.050171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.154 [2024-11-26 07:41:37.050190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.154 [2024-11-26 07:41:37.050198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.154 [2024-11-26 07:41:37.050417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.154 [2024-11-26 07:41:37.050635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.154 [2024-11-26 07:41:37.050644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.154 [2024-11-26 07:41:37.050651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.154 [2024-11-26 07:41:37.050658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.154 6625.75 IOPS, 25.88 MiB/s [2024-11-26T06:41:37.291Z] [2024-11-26 07:41:37.063590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.154 [2024-11-26 07:41:37.064038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.154 [2024-11-26 07:41:37.064056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.154 [2024-11-26 07:41:37.064064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.154 [2024-11-26 07:41:37.064282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.154 [2024-11-26 07:41:37.064501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.154 [2024-11-26 07:41:37.064510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.154 [2024-11-26 07:41:37.064517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.154 [2024-11-26 07:41:37.064524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.154 [2024-11-26 07:41:37.077457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.154 [2024-11-26 07:41:37.077914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.154 [2024-11-26 07:41:37.077938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.154 [2024-11-26 07:41:37.077947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.154 [2024-11-26 07:41:37.078170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.154 [2024-11-26 07:41:37.078390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.154 [2024-11-26 07:41:37.078406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.154 [2024-11-26 07:41:37.078413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.154 [2024-11-26 07:41:37.078421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.154 [2024-11-26 07:41:37.091359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.154 [2024-11-26 07:41:37.091906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.154 [2024-11-26 07:41:37.091929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.154 [2024-11-26 07:41:37.091936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.154 [2024-11-26 07:41:37.092159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.154 [2024-11-26 07:41:37.092380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.154 [2024-11-26 07:41:37.092389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.154 [2024-11-26 07:41:37.092397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.154 [2024-11-26 07:41:37.092404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.154 [2024-11-26 07:41:37.105338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.154 [2024-11-26 07:41:37.105873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.154 [2024-11-26 07:41:37.105891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.154 [2024-11-26 07:41:37.105903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.154 [2024-11-26 07:41:37.106122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.154 [2024-11-26 07:41:37.106340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.154 [2024-11-26 07:41:37.106349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.154 [2024-11-26 07:41:37.106356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.154 [2024-11-26 07:41:37.106363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.154 [2024-11-26 07:41:37.119306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.154 [2024-11-26 07:41:37.119859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.154 [2024-11-26 07:41:37.119911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.154 [2024-11-26 07:41:37.119923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.154 [2024-11-26 07:41:37.120162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.154 [2024-11-26 07:41:37.120385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.154 [2024-11-26 07:41:37.120393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.154 [2024-11-26 07:41:37.120401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.155 [2024-11-26 07:41:37.120409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.155 [2024-11-26 07:41:37.133141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.155 [2024-11-26 07:41:37.133820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.155 [2024-11-26 07:41:37.133857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.155 [2024-11-26 07:41:37.133876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.155 [2024-11-26 07:41:37.134115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.155 [2024-11-26 07:41:37.134337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.155 [2024-11-26 07:41:37.134347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.155 [2024-11-26 07:41:37.134354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.155 [2024-11-26 07:41:37.134362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.155 [2024-11-26 07:41:37.147092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.155 [2024-11-26 07:41:37.147578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.155 [2024-11-26 07:41:37.147597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.155 [2024-11-26 07:41:37.147604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.155 [2024-11-26 07:41:37.147823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.155 [2024-11-26 07:41:37.148052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.155 [2024-11-26 07:41:37.148062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.155 [2024-11-26 07:41:37.148069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.155 [2024-11-26 07:41:37.148076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.155 [2024-11-26 07:41:37.161022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.155 [2024-11-26 07:41:37.161475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.155 [2024-11-26 07:41:37.161492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.155 [2024-11-26 07:41:37.161499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.155 [2024-11-26 07:41:37.161718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.155 [2024-11-26 07:41:37.161941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.155 [2024-11-26 07:41:37.161950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.155 [2024-11-26 07:41:37.161957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.155 [2024-11-26 07:41:37.161964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.155 [2024-11-26 07:41:37.174901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.155 [2024-11-26 07:41:37.175500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.155 [2024-11-26 07:41:37.175538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.155 [2024-11-26 07:41:37.175549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.155 [2024-11-26 07:41:37.175787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.155 [2024-11-26 07:41:37.176018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.155 [2024-11-26 07:41:37.176028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.155 [2024-11-26 07:41:37.176036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.155 [2024-11-26 07:41:37.176044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.155 [2024-11-26 07:41:37.188775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.155 [2024-11-26 07:41:37.189441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.155 [2024-11-26 07:41:37.189479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.155 [2024-11-26 07:41:37.189491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.155 [2024-11-26 07:41:37.189729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.155 [2024-11-26 07:41:37.189959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.155 [2024-11-26 07:41:37.189969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.155 [2024-11-26 07:41:37.189977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.155 [2024-11-26 07:41:37.189989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.155 [2024-11-26 07:41:37.202728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.155 [2024-11-26 07:41:37.203369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.155 [2024-11-26 07:41:37.203407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.155 [2024-11-26 07:41:37.203418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.155 [2024-11-26 07:41:37.203655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.155 [2024-11-26 07:41:37.203886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.155 [2024-11-26 07:41:37.203896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.155 [2024-11-26 07:41:37.203903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.155 [2024-11-26 07:41:37.203911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.155 [2024-11-26 07:41:37.216642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.155 [2024-11-26 07:41:37.217196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.155 [2024-11-26 07:41:37.217216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.155 [2024-11-26 07:41:37.217223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.155 [2024-11-26 07:41:37.217442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.155 [2024-11-26 07:41:37.217661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.155 [2024-11-26 07:41:37.217670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.155 [2024-11-26 07:41:37.217677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.155 [2024-11-26 07:41:37.217684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.155 [2024-11-26 07:41:37.230630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.155 [2024-11-26 07:41:37.231158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.155 [2024-11-26 07:41:37.231175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.155 [2024-11-26 07:41:37.231182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.155 [2024-11-26 07:41:37.231400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.155 [2024-11-26 07:41:37.231619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.155 [2024-11-26 07:41:37.231628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.155 [2024-11-26 07:41:37.231634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.155 [2024-11-26 07:41:37.231641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.155 [2024-11-26 07:41:37.244575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.155 [2024-11-26 07:41:37.245214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.155 [2024-11-26 07:41:37.245251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.155 [2024-11-26 07:41:37.245262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.155 [2024-11-26 07:41:37.245500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.155 [2024-11-26 07:41:37.245722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.155 [2024-11-26 07:41:37.245731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.155 [2024-11-26 07:41:37.245739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.155 [2024-11-26 07:41:37.245747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.155 [2024-11-26 07:41:37.258498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.155 [2024-11-26 07:41:37.259092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.155 [2024-11-26 07:41:37.259113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.155 [2024-11-26 07:41:37.259121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.155 [2024-11-26 07:41:37.259340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.155 [2024-11-26 07:41:37.259558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.155 [2024-11-26 07:41:37.259566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.155 [2024-11-26 07:41:37.259573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.155 [2024-11-26 07:41:37.259580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.155 [2024-11-26 07:41:37.272306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.155 [2024-11-26 07:41:37.272879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.155 [2024-11-26 07:41:37.272897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.155 [2024-11-26 07:41:37.272904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.155 [2024-11-26 07:41:37.273123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.155 [2024-11-26 07:41:37.273341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.155 [2024-11-26 07:41:37.273350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.155 [2024-11-26 07:41:37.273357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.155 [2024-11-26 07:41:37.273364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.416 [2024-11-26 07:41:37.286091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.416 [2024-11-26 07:41:37.286629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.416 [2024-11-26 07:41:37.286646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.416 [2024-11-26 07:41:37.286658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.416 [2024-11-26 07:41:37.286881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.416 [2024-11-26 07:41:37.287100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.416 [2024-11-26 07:41:37.287108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.416 [2024-11-26 07:41:37.287115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.416 [2024-11-26 07:41:37.287121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.416 [2024-11-26 07:41:37.300052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.416 [2024-11-26 07:41:37.300637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.416 [2024-11-26 07:41:37.300654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.416 [2024-11-26 07:41:37.300662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.416 [2024-11-26 07:41:37.300884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.416 [2024-11-26 07:41:37.301104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.416 [2024-11-26 07:41:37.301115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.416 [2024-11-26 07:41:37.301122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.416 [2024-11-26 07:41:37.301128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.416 [2024-11-26 07:41:37.313845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.416 [2024-11-26 07:41:37.314369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.416 [2024-11-26 07:41:37.314387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.416 [2024-11-26 07:41:37.314394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.416 [2024-11-26 07:41:37.314612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.416 [2024-11-26 07:41:37.314832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.416 [2024-11-26 07:41:37.314840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.416 [2024-11-26 07:41:37.314848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.416 [2024-11-26 07:41:37.314854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.416 [2024-11-26 07:41:37.327793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.416 [2024-11-26 07:41:37.328414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.416 [2024-11-26 07:41:37.328453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.416 [2024-11-26 07:41:37.328464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.416 [2024-11-26 07:41:37.328702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.416 [2024-11-26 07:41:37.328934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.416 [2024-11-26 07:41:37.328950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.416 [2024-11-26 07:41:37.328959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.416 [2024-11-26 07:41:37.328968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.416 [2024-11-26 07:41:37.341704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.416 [2024-11-26 07:41:37.342287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.417 [2024-11-26 07:41:37.342326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.417 [2024-11-26 07:41:37.342337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.417 [2024-11-26 07:41:37.342575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.417 [2024-11-26 07:41:37.342798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.417 [2024-11-26 07:41:37.342808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.417 [2024-11-26 07:41:37.342816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.417 [2024-11-26 07:41:37.342825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.417 [2024-11-26 07:41:37.355561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.417 [2024-11-26 07:41:37.356158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.417 [2024-11-26 07:41:37.356197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.417 [2024-11-26 07:41:37.356208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.417 [2024-11-26 07:41:37.356456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.417 [2024-11-26 07:41:37.356680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.417 [2024-11-26 07:41:37.356690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.417 [2024-11-26 07:41:37.356698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.417 [2024-11-26 07:41:37.356707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.417 [2024-11-26 07:41:37.369442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.417 [2024-11-26 07:41:37.370145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.417 [2024-11-26 07:41:37.370184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.417 [2024-11-26 07:41:37.370195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.417 [2024-11-26 07:41:37.370433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.417 [2024-11-26 07:41:37.370657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.417 [2024-11-26 07:41:37.370668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.417 [2024-11-26 07:41:37.370675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.417 [2024-11-26 07:41:37.370688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.417 [2024-11-26 07:41:37.383431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.417 [2024-11-26 07:41:37.384158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.417 [2024-11-26 07:41:37.384196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.417 [2024-11-26 07:41:37.384209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.417 [2024-11-26 07:41:37.384448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.417 [2024-11-26 07:41:37.384672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.417 [2024-11-26 07:41:37.384682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.417 [2024-11-26 07:41:37.384690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.417 [2024-11-26 07:41:37.384697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.417 [2024-11-26 07:41:37.397224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.417 [2024-11-26 07:41:37.397760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.417 [2024-11-26 07:41:37.397780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.417 [2024-11-26 07:41:37.397788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.417 [2024-11-26 07:41:37.398012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.417 [2024-11-26 07:41:37.398232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.417 [2024-11-26 07:41:37.398242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.417 [2024-11-26 07:41:37.398250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.417 [2024-11-26 07:41:37.398256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.417 [2024-11-26 07:41:37.411184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.417 [2024-11-26 07:41:37.411810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.417 [2024-11-26 07:41:37.411848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.417 [2024-11-26 07:41:37.411860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.417 [2024-11-26 07:41:37.412108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.417 [2024-11-26 07:41:37.412332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.417 [2024-11-26 07:41:37.412342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.417 [2024-11-26 07:41:37.412350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.417 [2024-11-26 07:41:37.412358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.417 [2024-11-26 07:41:37.425101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.417 [2024-11-26 07:41:37.425709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.417 [2024-11-26 07:41:37.425748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.417 [2024-11-26 07:41:37.425759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.417 [2024-11-26 07:41:37.426004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.417 [2024-11-26 07:41:37.426229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.417 [2024-11-26 07:41:37.426239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.417 [2024-11-26 07:41:37.426247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.417 [2024-11-26 07:41:37.426255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.417 [2024-11-26 07:41:37.438987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.417 [2024-11-26 07:41:37.439573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.417 [2024-11-26 07:41:37.439593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.417 [2024-11-26 07:41:37.439601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.417 [2024-11-26 07:41:37.439821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.417 [2024-11-26 07:41:37.440046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.417 [2024-11-26 07:41:37.440057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.417 [2024-11-26 07:41:37.440064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.418 [2024-11-26 07:41:37.440071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.418 [2024-11-26 07:41:37.452793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.418 [2024-11-26 07:41:37.453443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.418 [2024-11-26 07:41:37.453482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.418 [2024-11-26 07:41:37.453494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.418 [2024-11-26 07:41:37.453732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.418 [2024-11-26 07:41:37.453962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.418 [2024-11-26 07:41:37.453973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.418 [2024-11-26 07:41:37.453981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.418 [2024-11-26 07:41:37.453989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.418 [2024-11-26 07:41:37.466732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.418 [2024-11-26 07:41:37.467381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.418 [2024-11-26 07:41:37.467420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.418 [2024-11-26 07:41:37.467437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.418 [2024-11-26 07:41:37.467677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.418 [2024-11-26 07:41:37.467909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.418 [2024-11-26 07:41:37.467920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.418 [2024-11-26 07:41:37.467928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.418 [2024-11-26 07:41:37.467936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.418 [2024-11-26 07:41:37.480667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.418 [2024-11-26 07:41:37.481294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.418 [2024-11-26 07:41:37.481333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.418 [2024-11-26 07:41:37.481344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.418 [2024-11-26 07:41:37.481582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.418 [2024-11-26 07:41:37.481805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.418 [2024-11-26 07:41:37.481815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.418 [2024-11-26 07:41:37.481823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.418 [2024-11-26 07:41:37.481831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.418 [2024-11-26 07:41:37.494570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.418 [2024-11-26 07:41:37.495214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.418 [2024-11-26 07:41:37.495253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.418 [2024-11-26 07:41:37.495265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.418 [2024-11-26 07:41:37.495504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.418 [2024-11-26 07:41:37.495728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.418 [2024-11-26 07:41:37.495737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.418 [2024-11-26 07:41:37.495745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.418 [2024-11-26 07:41:37.495753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.418 [2024-11-26 07:41:37.508494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.418 [2024-11-26 07:41:37.508905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.418 [2024-11-26 07:41:37.508928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.418 [2024-11-26 07:41:37.508936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.418 [2024-11-26 07:41:37.509158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.418 [2024-11-26 07:41:37.509378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.418 [2024-11-26 07:41:37.509392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.418 [2024-11-26 07:41:37.509400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.418 [2024-11-26 07:41:37.509407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.418 [2024-11-26 07:41:37.522359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.418 [2024-11-26 07:41:37.522971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.418 [2024-11-26 07:41:37.523010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.418 [2024-11-26 07:41:37.523022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.418 [2024-11-26 07:41:37.523262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.418 [2024-11-26 07:41:37.523485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.418 [2024-11-26 07:41:37.523495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.418 [2024-11-26 07:41:37.523503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.418 [2024-11-26 07:41:37.523510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.418 [2024-11-26 07:41:37.536249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.418 [2024-11-26 07:41:37.536946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.418 [2024-11-26 07:41:37.536984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.418 [2024-11-26 07:41:37.536997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.418 [2024-11-26 07:41:37.537238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.418 [2024-11-26 07:41:37.537462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.418 [2024-11-26 07:41:37.537472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.418 [2024-11-26 07:41:37.537480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.418 [2024-11-26 07:41:37.537488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.679 [2024-11-26 07:41:37.550226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.679 [2024-11-26 07:41:37.550884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.679 [2024-11-26 07:41:37.550922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.679 [2024-11-26 07:41:37.550935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.679 [2024-11-26 07:41:37.551177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.679 [2024-11-26 07:41:37.551400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.679 [2024-11-26 07:41:37.551409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.679 [2024-11-26 07:41:37.551417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.679 [2024-11-26 07:41:37.551430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.679 [2024-11-26 07:41:37.564178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.679 [2024-11-26 07:41:37.564765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.679 [2024-11-26 07:41:37.564785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.679 [2024-11-26 07:41:37.564793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.679 [2024-11-26 07:41:37.565020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.679 [2024-11-26 07:41:37.565240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.679 [2024-11-26 07:41:37.565250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.679 [2024-11-26 07:41:37.565257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.679 [2024-11-26 07:41:37.565264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.679 [2024-11-26 07:41:37.577986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.679 [2024-11-26 07:41:37.578649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.679 [2024-11-26 07:41:37.578689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.679 [2024-11-26 07:41:37.578701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.679 [2024-11-26 07:41:37.578948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.680 [2024-11-26 07:41:37.579175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.680 [2024-11-26 07:41:37.579186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.680 [2024-11-26 07:41:37.579196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.680 [2024-11-26 07:41:37.579204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.680 [2024-11-26 07:41:37.591938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.680 [2024-11-26 07:41:37.592615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.680 [2024-11-26 07:41:37.592654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.680 [2024-11-26 07:41:37.592666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.680 [2024-11-26 07:41:37.592913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.680 [2024-11-26 07:41:37.593138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.680 [2024-11-26 07:41:37.593148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.680 [2024-11-26 07:41:37.593156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.680 [2024-11-26 07:41:37.593164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.680 [2024-11-26 07:41:37.605898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.680 [2024-11-26 07:41:37.606579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.680 [2024-11-26 07:41:37.606617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.680 [2024-11-26 07:41:37.606628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.680 [2024-11-26 07:41:37.606876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.680 [2024-11-26 07:41:37.607101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.680 [2024-11-26 07:41:37.607111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.680 [2024-11-26 07:41:37.607120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.680 [2024-11-26 07:41:37.607128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.680 [2024-11-26 07:41:37.619854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.680 [2024-11-26 07:41:37.620459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.680 [2024-11-26 07:41:37.620497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.680 [2024-11-26 07:41:37.620509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.680 [2024-11-26 07:41:37.620747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.680 [2024-11-26 07:41:37.620979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.680 [2024-11-26 07:41:37.620990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.680 [2024-11-26 07:41:37.620998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.680 [2024-11-26 07:41:37.621006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.680 [2024-11-26 07:41:37.633828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.680 [2024-11-26 07:41:37.634374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.680 [2024-11-26 07:41:37.634395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.680 [2024-11-26 07:41:37.634404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.680 [2024-11-26 07:41:37.634623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.680 [2024-11-26 07:41:37.634843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.680 [2024-11-26 07:41:37.634852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.680 [2024-11-26 07:41:37.634859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.680 [2024-11-26 07:41:37.634874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.680 [2024-11-26 07:41:37.647803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.680 [2024-11-26 07:41:37.648329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.680 [2024-11-26 07:41:37.648347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.680 [2024-11-26 07:41:37.648363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.680 [2024-11-26 07:41:37.648581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.680 [2024-11-26 07:41:37.648801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.680 [2024-11-26 07:41:37.648810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.680 [2024-11-26 07:41:37.648817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.680 [2024-11-26 07:41:37.648824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.680 [2024-11-26 07:41:37.661764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.680 [2024-11-26 07:41:37.662332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.680 [2024-11-26 07:41:37.662349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.680 [2024-11-26 07:41:37.662357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.680 [2024-11-26 07:41:37.662575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.680 [2024-11-26 07:41:37.662794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.680 [2024-11-26 07:41:37.662804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.680 [2024-11-26 07:41:37.662811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.680 [2024-11-26 07:41:37.662818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.680 [2024-11-26 07:41:37.675742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.680 [2024-11-26 07:41:37.676399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.680 [2024-11-26 07:41:37.676438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.680 [2024-11-26 07:41:37.676449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.680 [2024-11-26 07:41:37.676686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.680 [2024-11-26 07:41:37.676920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.680 [2024-11-26 07:41:37.676930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.680 [2024-11-26 07:41:37.676938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.680 [2024-11-26 07:41:37.676946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.680 [2024-11-26 07:41:37.689672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.680 [2024-11-26 07:41:37.690244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.680 [2024-11-26 07:41:37.690265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.680 [2024-11-26 07:41:37.690273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.680 [2024-11-26 07:41:37.690492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.680 [2024-11-26 07:41:37.690712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.680 [2024-11-26 07:41:37.690726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.680 [2024-11-26 07:41:37.690734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.680 [2024-11-26 07:41:37.690740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.680 [2024-11-26 07:41:37.703567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.680 [2024-11-26 07:41:37.704243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.680 [2024-11-26 07:41:37.704282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.680 [2024-11-26 07:41:37.704293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.680 [2024-11-26 07:41:37.704531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.680 [2024-11-26 07:41:37.704755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.680 [2024-11-26 07:41:37.704765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.681 [2024-11-26 07:41:37.704773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.681 [2024-11-26 07:41:37.704781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.681 [2024-11-26 07:41:37.717516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.681 [2024-11-26 07:41:37.718171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.681 [2024-11-26 07:41:37.718210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.681 [2024-11-26 07:41:37.718222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.681 [2024-11-26 07:41:37.718460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.681 [2024-11-26 07:41:37.718683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.681 [2024-11-26 07:41:37.718692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.681 [2024-11-26 07:41:37.718700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.681 [2024-11-26 07:41:37.718708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.681 [2024-11-26 07:41:37.731455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.681 [2024-11-26 07:41:37.732139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.681 [2024-11-26 07:41:37.732178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.681 [2024-11-26 07:41:37.732189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.681 [2024-11-26 07:41:37.732426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.681 [2024-11-26 07:41:37.732649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.681 [2024-11-26 07:41:37.732659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.681 [2024-11-26 07:41:37.732667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.681 [2024-11-26 07:41:37.732680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.681 [2024-11-26 07:41:37.745418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.681 [2024-11-26 07:41:37.745997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.681 [2024-11-26 07:41:37.746036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.681 [2024-11-26 07:41:37.746049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.681 [2024-11-26 07:41:37.746289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.681 [2024-11-26 07:41:37.746513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.681 [2024-11-26 07:41:37.746523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.681 [2024-11-26 07:41:37.746531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.681 [2024-11-26 07:41:37.746539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.681 [2024-11-26 07:41:37.759289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:53.681 [2024-11-26 07:41:37.759921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.681 [2024-11-26 07:41:37.759961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:53.681 [2024-11-26 07:41:37.759972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:53.681 [2024-11-26 07:41:37.760210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:53.681 [2024-11-26 07:41:37.760434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:53.681 [2024-11-26 07:41:37.760444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:53.681 [2024-11-26 07:41:37.760451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:53.681 [2024-11-26 07:41:37.760459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:53.681 [2024-11-26 07:41:37.773195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.681 [2024-11-26 07:41:37.773785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.681 [2024-11-26 07:41:37.773805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.681 [2024-11-26 07:41:37.773813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.681 [2024-11-26 07:41:37.774038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.681 [2024-11-26 07:41:37.774258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.681 [2024-11-26 07:41:37.774267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.681 [2024-11-26 07:41:37.774275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.681 [2024-11-26 07:41:37.774282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.681 [2024-11-26 07:41:37.786997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.681 [2024-11-26 07:41:37.787660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.681 [2024-11-26 07:41:37.787699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.681 [2024-11-26 07:41:37.787710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.681 [2024-11-26 07:41:37.787957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.681 [2024-11-26 07:41:37.788182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.681 [2024-11-26 07:41:37.788193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.681 [2024-11-26 07:41:37.788201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.681 [2024-11-26 07:41:37.788209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.681 [2024-11-26 07:41:37.800935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.681 [2024-11-26 07:41:37.801468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.681 [2024-11-26 07:41:37.801488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.681 [2024-11-26 07:41:37.801497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.681 [2024-11-26 07:41:37.801716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.681 [2024-11-26 07:41:37.801943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.681 [2024-11-26 07:41:37.801953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.681 [2024-11-26 07:41:37.801961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.681 [2024-11-26 07:41:37.801968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.942 [2024-11-26 07:41:37.814899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.942 [2024-11-26 07:41:37.815424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.942 [2024-11-26 07:41:37.815463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.942 [2024-11-26 07:41:37.815474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.942 [2024-11-26 07:41:37.815712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.942 [2024-11-26 07:41:37.815945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.942 [2024-11-26 07:41:37.815956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.942 [2024-11-26 07:41:37.815964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.942 [2024-11-26 07:41:37.815972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.942 [2024-11-26 07:41:37.828752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.942 [2024-11-26 07:41:37.829438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.942 [2024-11-26 07:41:37.829476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.942 [2024-11-26 07:41:37.829488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.942 [2024-11-26 07:41:37.829732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.942 [2024-11-26 07:41:37.829966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.942 [2024-11-26 07:41:37.829977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.942 [2024-11-26 07:41:37.829985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.942 [2024-11-26 07:41:37.829994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.942 [2024-11-26 07:41:37.842723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.942 [2024-11-26 07:41:37.843379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.942 [2024-11-26 07:41:37.843418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.942 [2024-11-26 07:41:37.843430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.942 [2024-11-26 07:41:37.843668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.942 [2024-11-26 07:41:37.843901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.942 [2024-11-26 07:41:37.843912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.942 [2024-11-26 07:41:37.843921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.942 [2024-11-26 07:41:37.843929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.942 [2024-11-26 07:41:37.856844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.942 [2024-11-26 07:41:37.857530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.942 [2024-11-26 07:41:37.857568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.942 [2024-11-26 07:41:37.857579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.942 [2024-11-26 07:41:37.857817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.942 [2024-11-26 07:41:37.858061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.942 [2024-11-26 07:41:37.858074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.942 [2024-11-26 07:41:37.858082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.942 [2024-11-26 07:41:37.858090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.942 [2024-11-26 07:41:37.870814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.942 [2024-11-26 07:41:37.871471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.942 [2024-11-26 07:41:37.871510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.943 [2024-11-26 07:41:37.871521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.943 [2024-11-26 07:41:37.871759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.943 [2024-11-26 07:41:37.871991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.943 [2024-11-26 07:41:37.872007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.943 [2024-11-26 07:41:37.872015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.943 [2024-11-26 07:41:37.872023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.943 [2024-11-26 07:41:37.884746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.943 [2024-11-26 07:41:37.885297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.943 [2024-11-26 07:41:37.885317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.943 [2024-11-26 07:41:37.885325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.943 [2024-11-26 07:41:37.885545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.943 [2024-11-26 07:41:37.885764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.943 [2024-11-26 07:41:37.885773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.943 [2024-11-26 07:41:37.885781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.943 [2024-11-26 07:41:37.885788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.943 [2024-11-26 07:41:37.898722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.943 [2024-11-26 07:41:37.899295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.943 [2024-11-26 07:41:37.899313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.943 [2024-11-26 07:41:37.899321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.943 [2024-11-26 07:41:37.899540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.943 [2024-11-26 07:41:37.899759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.943 [2024-11-26 07:41:37.899770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.943 [2024-11-26 07:41:37.899777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.943 [2024-11-26 07:41:37.899785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.943 [2024-11-26 07:41:37.912505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.943 [2024-11-26 07:41:37.913040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.943 [2024-11-26 07:41:37.913057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.943 [2024-11-26 07:41:37.913065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.943 [2024-11-26 07:41:37.913284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.943 [2024-11-26 07:41:37.913503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.943 [2024-11-26 07:41:37.913512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.943 [2024-11-26 07:41:37.913519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.943 [2024-11-26 07:41:37.913529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.943 [2024-11-26 07:41:37.926472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.943 [2024-11-26 07:41:37.927101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.943 [2024-11-26 07:41:37.927139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.943 [2024-11-26 07:41:37.927150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.943 [2024-11-26 07:41:37.927388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.943 [2024-11-26 07:41:37.927611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.943 [2024-11-26 07:41:37.927622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.943 [2024-11-26 07:41:37.927630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.943 [2024-11-26 07:41:37.927637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.943 [2024-11-26 07:41:37.940372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.943 [2024-11-26 07:41:37.940967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.943 [2024-11-26 07:41:37.941005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.943 [2024-11-26 07:41:37.941017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.943 [2024-11-26 07:41:37.941258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.943 [2024-11-26 07:41:37.941482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.943 [2024-11-26 07:41:37.941491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.943 [2024-11-26 07:41:37.941500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.943 [2024-11-26 07:41:37.941508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.943 [2024-11-26 07:41:37.954269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.943 [2024-11-26 07:41:37.954966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.943 [2024-11-26 07:41:37.955005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.943 [2024-11-26 07:41:37.955017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.943 [2024-11-26 07:41:37.955257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.943 [2024-11-26 07:41:37.955480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.943 [2024-11-26 07:41:37.955489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.943 [2024-11-26 07:41:37.955497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.943 [2024-11-26 07:41:37.955505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.943 [2024-11-26 07:41:37.968253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.943 [2024-11-26 07:41:37.968895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.943 [2024-11-26 07:41:37.968934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.943 [2024-11-26 07:41:37.968947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.943 [2024-11-26 07:41:37.969187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.943 [2024-11-26 07:41:37.969410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.943 [2024-11-26 07:41:37.969421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.943 [2024-11-26 07:41:37.969428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.943 [2024-11-26 07:41:37.969436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.943 [2024-11-26 07:41:37.982172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.943 [2024-11-26 07:41:37.982814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.943 [2024-11-26 07:41:37.982853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.943 [2024-11-26 07:41:37.982873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.943 [2024-11-26 07:41:37.983111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.943 [2024-11-26 07:41:37.983335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.943 [2024-11-26 07:41:37.983345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.943 [2024-11-26 07:41:37.983353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.943 [2024-11-26 07:41:37.983362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.943 [2024-11-26 07:41:37.996090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.943 [2024-11-26 07:41:37.996686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.943 [2024-11-26 07:41:37.996724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.943 [2024-11-26 07:41:37.996735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.943 [2024-11-26 07:41:37.996983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.943 [2024-11-26 07:41:37.997207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.943 [2024-11-26 07:41:37.997217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.943 [2024-11-26 07:41:37.997225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.943 [2024-11-26 07:41:37.997233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.943 [2024-11-26 07:41:38.009960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.943 [2024-11-26 07:41:38.010542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.943 [2024-11-26 07:41:38.010562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.944 [2024-11-26 07:41:38.010570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.944 [2024-11-26 07:41:38.010794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.944 [2024-11-26 07:41:38.011020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.944 [2024-11-26 07:41:38.011031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.944 [2024-11-26 07:41:38.011039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.944 [2024-11-26 07:41:38.011046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.944 [2024-11-26 07:41:38.023774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.944 [2024-11-26 07:41:38.024437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.944 [2024-11-26 07:41:38.024476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.944 [2024-11-26 07:41:38.024487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.944 [2024-11-26 07:41:38.024725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.944 [2024-11-26 07:41:38.024957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.944 [2024-11-26 07:41:38.024968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.944 [2024-11-26 07:41:38.024976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.944 [2024-11-26 07:41:38.024984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.944 [2024-11-26 07:41:38.037707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.944 [2024-11-26 07:41:38.038341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.944 [2024-11-26 07:41:38.038380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.944 [2024-11-26 07:41:38.038392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.944 [2024-11-26 07:41:38.038630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.944 [2024-11-26 07:41:38.038853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.944 [2024-11-26 07:41:38.038873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.944 [2024-11-26 07:41:38.038881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.944 [2024-11-26 07:41:38.038889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.944 [2024-11-26 07:41:38.051619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.944 [2024-11-26 07:41:38.052179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.944 [2024-11-26 07:41:38.052199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.944 [2024-11-26 07:41:38.052208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.944 [2024-11-26 07:41:38.052427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.944 [2024-11-26 07:41:38.052646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.944 [2024-11-26 07:41:38.052660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.944 [2024-11-26 07:41:38.052667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.944 [2024-11-26 07:41:38.052674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:53.944 5300.60 IOPS, 20.71 MiB/s [2024-11-26T06:41:38.081Z] [2024-11-26 07:41:38.065603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:53.944 [2024-11-26 07:41:38.066251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.944 [2024-11-26 07:41:38.066290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:53.944 [2024-11-26 07:41:38.066301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:53.944 [2024-11-26 07:41:38.066539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:53.944 [2024-11-26 07:41:38.066763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:53.944 [2024-11-26 07:41:38.066773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:53.944 [2024-11-26 07:41:38.066781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:53.944 [2024-11-26 07:41:38.066789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.205 [2024-11-26 07:41:38.079525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.205 [2024-11-26 07:41:38.080170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.205 [2024-11-26 07:41:38.080208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.205 [2024-11-26 07:41:38.080220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.205 [2024-11-26 07:41:38.080459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.205 [2024-11-26 07:41:38.080683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.205 [2024-11-26 07:41:38.080694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.205 [2024-11-26 07:41:38.080703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.205 [2024-11-26 07:41:38.080712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.205 [2024-11-26 07:41:38.093447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.205 [2024-11-26 07:41:38.094142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.205 [2024-11-26 07:41:38.094181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.205 [2024-11-26 07:41:38.094193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.205 [2024-11-26 07:41:38.094432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.205 [2024-11-26 07:41:38.094655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.205 [2024-11-26 07:41:38.094665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.205 [2024-11-26 07:41:38.094673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.205 [2024-11-26 07:41:38.094686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.205 [2024-11-26 07:41:38.107420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.205 [2024-11-26 07:41:38.107967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.205 [2024-11-26 07:41:38.108006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.205 [2024-11-26 07:41:38.108018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.205 [2024-11-26 07:41:38.108257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.205 [2024-11-26 07:41:38.108481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.205 [2024-11-26 07:41:38.108491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.205 [2024-11-26 07:41:38.108499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.205 [2024-11-26 07:41:38.108506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.205 [2024-11-26 07:41:38.121247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.205 [2024-11-26 07:41:38.121944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.205 [2024-11-26 07:41:38.121983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.205 [2024-11-26 07:41:38.121995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.205 [2024-11-26 07:41:38.122237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.205 [2024-11-26 07:41:38.122460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.205 [2024-11-26 07:41:38.122470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.205 [2024-11-26 07:41:38.122478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.205 [2024-11-26 07:41:38.122486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.205 [2024-11-26 07:41:38.135225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.205 [2024-11-26 07:41:38.135893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.205 [2024-11-26 07:41:38.135932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.205 [2024-11-26 07:41:38.135944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.205 [2024-11-26 07:41:38.136183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.205 [2024-11-26 07:41:38.136406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.205 [2024-11-26 07:41:38.136416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.205 [2024-11-26 07:41:38.136424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.205 [2024-11-26 07:41:38.136432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.205 [2024-11-26 07:41:38.149164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.205 [2024-11-26 07:41:38.149797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.205 [2024-11-26 07:41:38.149836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.205 [2024-11-26 07:41:38.149848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.205 [2024-11-26 07:41:38.150097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.205 [2024-11-26 07:41:38.150321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.205 [2024-11-26 07:41:38.150331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.205 [2024-11-26 07:41:38.150339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.205 [2024-11-26 07:41:38.150347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.205 [2024-11-26 07:41:38.163083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.205 [2024-11-26 07:41:38.163751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.205 [2024-11-26 07:41:38.163789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.205 [2024-11-26 07:41:38.163801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.205 [2024-11-26 07:41:38.164049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.205 [2024-11-26 07:41:38.164274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.205 [2024-11-26 07:41:38.164284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.205 [2024-11-26 07:41:38.164292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.205 [2024-11-26 07:41:38.164300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.205 [2024-11-26 07:41:38.177029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.205 [2024-11-26 07:41:38.177685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.205 [2024-11-26 07:41:38.177724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.205 [2024-11-26 07:41:38.177735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.205 [2024-11-26 07:41:38.177982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.205 [2024-11-26 07:41:38.178207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.205 [2024-11-26 07:41:38.178217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.205 [2024-11-26 07:41:38.178225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.205 [2024-11-26 07:41:38.178233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.205 [2024-11-26 07:41:38.190959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.205 [2024-11-26 07:41:38.191648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.205 [2024-11-26 07:41:38.191686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.205 [2024-11-26 07:41:38.191702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.205 [2024-11-26 07:41:38.191951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.205 [2024-11-26 07:41:38.192175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.205 [2024-11-26 07:41:38.192186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.205 [2024-11-26 07:41:38.192194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.205 [2024-11-26 07:41:38.192202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.205 [2024-11-26 07:41:38.204930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.205 [2024-11-26 07:41:38.205602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.205 [2024-11-26 07:41:38.205641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.205 [2024-11-26 07:41:38.205651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.205 [2024-11-26 07:41:38.205899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.205 [2024-11-26 07:41:38.206123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.205 [2024-11-26 07:41:38.206133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.205 [2024-11-26 07:41:38.206141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.205 [2024-11-26 07:41:38.206149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.205 [2024-11-26 07:41:38.218878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.205 [2024-11-26 07:41:38.219453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.205 [2024-11-26 07:41:38.219472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.205 [2024-11-26 07:41:38.219481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.205 [2024-11-26 07:41:38.219700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.205 [2024-11-26 07:41:38.219927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.205 [2024-11-26 07:41:38.219937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.205 [2024-11-26 07:41:38.219944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.205 [2024-11-26 07:41:38.219951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.205 [2024-11-26 07:41:38.232681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.205 [2024-11-26 07:41:38.233177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.205 [2024-11-26 07:41:38.233195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.205 [2024-11-26 07:41:38.233202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.205 [2024-11-26 07:41:38.233420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.205 [2024-11-26 07:41:38.233644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.205 [2024-11-26 07:41:38.233654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.205 [2024-11-26 07:41:38.233661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.205 [2024-11-26 07:41:38.233667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.206 [2024-11-26 07:41:38.246595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.206 [2024-11-26 07:41:38.247061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.206 [2024-11-26 07:41:38.247079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.206 [2024-11-26 07:41:38.247087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.206 [2024-11-26 07:41:38.247305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.206 [2024-11-26 07:41:38.247524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.206 [2024-11-26 07:41:38.247534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.206 [2024-11-26 07:41:38.247542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.206 [2024-11-26 07:41:38.247549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.206 [2024-11-26 07:41:38.260401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.206 [2024-11-26 07:41:38.261065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.206 [2024-11-26 07:41:38.261103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.206 [2024-11-26 07:41:38.261114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.206 [2024-11-26 07:41:38.261351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.206 [2024-11-26 07:41:38.261575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.206 [2024-11-26 07:41:38.261584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.206 [2024-11-26 07:41:38.261593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.206 [2024-11-26 07:41:38.261601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.206 [2024-11-26 07:41:38.274338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.206 [2024-11-26 07:41:38.274833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.206 [2024-11-26 07:41:38.274878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.206 [2024-11-26 07:41:38.274890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.206 [2024-11-26 07:41:38.275128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.206 [2024-11-26 07:41:38.275351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.206 [2024-11-26 07:41:38.275361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.206 [2024-11-26 07:41:38.275369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.206 [2024-11-26 07:41:38.275382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.206 [2024-11-26 07:41:38.288320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.206 [2024-11-26 07:41:38.288970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.206 [2024-11-26 07:41:38.289009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.206 [2024-11-26 07:41:38.289022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.206 [2024-11-26 07:41:38.289261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.206 [2024-11-26 07:41:38.289484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.206 [2024-11-26 07:41:38.289494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.206 [2024-11-26 07:41:38.289502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.206 [2024-11-26 07:41:38.289510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.206 [2024-11-26 07:41:38.302243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.206 [2024-11-26 07:41:38.302884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.206 [2024-11-26 07:41:38.302922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.206 [2024-11-26 07:41:38.302934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.206 [2024-11-26 07:41:38.303176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.206 [2024-11-26 07:41:38.303399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.206 [2024-11-26 07:41:38.303408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.206 [2024-11-26 07:41:38.303416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.206 [2024-11-26 07:41:38.303424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.206 [2024-11-26 07:41:38.316157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.206 [2024-11-26 07:41:38.316831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.206 [2024-11-26 07:41:38.316876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.206 [2024-11-26 07:41:38.316888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.206 [2024-11-26 07:41:38.317126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.206 [2024-11-26 07:41:38.317349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.206 [2024-11-26 07:41:38.317358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.206 [2024-11-26 07:41:38.317366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.206 [2024-11-26 07:41:38.317374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.206 [2024-11-26 07:41:38.330118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.206 [2024-11-26 07:41:38.330771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.206 [2024-11-26 07:41:38.330809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.206 [2024-11-26 07:41:38.330821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.206 [2024-11-26 07:41:38.331072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.206 [2024-11-26 07:41:38.331296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.206 [2024-11-26 07:41:38.331307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.206 [2024-11-26 07:41:38.331315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.206 [2024-11-26 07:41:38.331322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.467 [2024-11-26 07:41:38.344057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.467 [2024-11-26 07:41:38.344731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.467 [2024-11-26 07:41:38.344769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.467 [2024-11-26 07:41:38.344781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.467 [2024-11-26 07:41:38.345028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.467 [2024-11-26 07:41:38.345252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.467 [2024-11-26 07:41:38.345263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.467 [2024-11-26 07:41:38.345272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.467 [2024-11-26 07:41:38.345281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.467 [2024-11-26 07:41:38.358010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.467 [2024-11-26 07:41:38.358668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.467 [2024-11-26 07:41:38.358706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.467 [2024-11-26 07:41:38.358718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.467 [2024-11-26 07:41:38.358966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.467 [2024-11-26 07:41:38.359190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.467 [2024-11-26 07:41:38.359200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.467 [2024-11-26 07:41:38.359208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.467 [2024-11-26 07:41:38.359216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.467 [2024-11-26 07:41:38.371950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.467 [2024-11-26 07:41:38.372622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.467 [2024-11-26 07:41:38.372661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.467 [2024-11-26 07:41:38.372677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.467 [2024-11-26 07:41:38.372924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.467 [2024-11-26 07:41:38.373148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.467 [2024-11-26 07:41:38.373158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.467 [2024-11-26 07:41:38.373166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.467 [2024-11-26 07:41:38.373174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.467 [2024-11-26 07:41:38.385900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.467 [2024-11-26 07:41:38.386546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.467 [2024-11-26 07:41:38.386585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.467 [2024-11-26 07:41:38.386596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.467 [2024-11-26 07:41:38.386834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.467 [2024-11-26 07:41:38.387069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.467 [2024-11-26 07:41:38.387080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.467 [2024-11-26 07:41:38.387088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.467 [2024-11-26 07:41:38.387096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.467 [2024-11-26 07:41:38.399821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.467 [2024-11-26 07:41:38.400369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.467 [2024-11-26 07:41:38.400389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.467 [2024-11-26 07:41:38.400397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.467 [2024-11-26 07:41:38.400616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.467 [2024-11-26 07:41:38.400835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.467 [2024-11-26 07:41:38.400845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.467 [2024-11-26 07:41:38.400852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.467 [2024-11-26 07:41:38.400859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.467 [2024-11-26 07:41:38.413788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.467 [2024-11-26 07:41:38.414356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.467 [2024-11-26 07:41:38.414374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.467 [2024-11-26 07:41:38.414382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.467 [2024-11-26 07:41:38.414600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.467 [2024-11-26 07:41:38.414824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.467 [2024-11-26 07:41:38.414834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.467 [2024-11-26 07:41:38.414841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.468 [2024-11-26 07:41:38.414848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.468 [2024-11-26 07:41:38.427584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.468 [2024-11-26 07:41:38.428155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.468 [2024-11-26 07:41:38.428173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.468 [2024-11-26 07:41:38.428181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.468 [2024-11-26 07:41:38.428399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.468 [2024-11-26 07:41:38.428618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.468 [2024-11-26 07:41:38.428628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.468 [2024-11-26 07:41:38.428635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.468 [2024-11-26 07:41:38.428641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.468 [2024-11-26 07:41:38.441363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.468 [2024-11-26 07:41:38.441908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.468 [2024-11-26 07:41:38.441932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.468 [2024-11-26 07:41:38.441940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.468 [2024-11-26 07:41:38.442163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.468 [2024-11-26 07:41:38.442383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.468 [2024-11-26 07:41:38.442393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.468 [2024-11-26 07:41:38.442400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.468 [2024-11-26 07:41:38.442407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.468 [2024-11-26 07:41:38.455340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.468 [2024-11-26 07:41:38.455955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.468 [2024-11-26 07:41:38.455994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.468 [2024-11-26 07:41:38.456007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.468 [2024-11-26 07:41:38.456247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.468 [2024-11-26 07:41:38.456471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.468 [2024-11-26 07:41:38.456481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.468 [2024-11-26 07:41:38.456489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.468 [2024-11-26 07:41:38.456502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.468 [2024-11-26 07:41:38.469251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.468 [2024-11-26 07:41:38.469917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.468 [2024-11-26 07:41:38.469955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.468 [2024-11-26 07:41:38.469967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.468 [2024-11-26 07:41:38.470207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.468 [2024-11-26 07:41:38.470431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.468 [2024-11-26 07:41:38.470441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.468 [2024-11-26 07:41:38.470449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.468 [2024-11-26 07:41:38.470457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.468 [2024-11-26 07:41:38.483191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.468 [2024-11-26 07:41:38.483839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.468 [2024-11-26 07:41:38.483885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.468 [2024-11-26 07:41:38.483897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.468 [2024-11-26 07:41:38.484135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.468 [2024-11-26 07:41:38.484359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.468 [2024-11-26 07:41:38.484369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.468 [2024-11-26 07:41:38.484377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.468 [2024-11-26 07:41:38.484385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.468 [2024-11-26 07:41:38.497116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.468 [2024-11-26 07:41:38.497794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.468 [2024-11-26 07:41:38.497833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.468 [2024-11-26 07:41:38.497845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.468 [2024-11-26 07:41:38.498095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.468 [2024-11-26 07:41:38.498319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.468 [2024-11-26 07:41:38.498329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.468 [2024-11-26 07:41:38.498337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.468 [2024-11-26 07:41:38.498345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.468 [2024-11-26 07:41:38.511072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.468 [2024-11-26 07:41:38.511726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.468 [2024-11-26 07:41:38.511765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.468 [2024-11-26 07:41:38.511776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.468 [2024-11-26 07:41:38.512025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.468 [2024-11-26 07:41:38.512250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.468 [2024-11-26 07:41:38.512260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.468 [2024-11-26 07:41:38.512268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.468 [2024-11-26 07:41:38.512276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.468 [2024-11-26 07:41:38.525029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.468 [2024-11-26 07:41:38.525685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.468 [2024-11-26 07:41:38.525724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.468 [2024-11-26 07:41:38.525735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.468 [2024-11-26 07:41:38.525983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.468 [2024-11-26 07:41:38.526207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.468 [2024-11-26 07:41:38.526217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.468 [2024-11-26 07:41:38.526225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.468 [2024-11-26 07:41:38.526233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.469 [2024-11-26 07:41:38.538978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.469 [2024-11-26 07:41:38.539558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.469 [2024-11-26 07:41:38.539595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.469 [2024-11-26 07:41:38.539607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.469 [2024-11-26 07:41:38.539845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.469 [2024-11-26 07:41:38.540085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.469 [2024-11-26 07:41:38.540097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.469 [2024-11-26 07:41:38.540105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.469 [2024-11-26 07:41:38.540113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.469 [2024-11-26 07:41:38.552843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.469 [2024-11-26 07:41:38.553510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.469 [2024-11-26 07:41:38.553548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.469 [2024-11-26 07:41:38.553564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.469 [2024-11-26 07:41:38.553802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.469 [2024-11-26 07:41:38.554035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.469 [2024-11-26 07:41:38.554045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.469 [2024-11-26 07:41:38.554053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.469 [2024-11-26 07:41:38.554062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2308580 Killed "${NVMF_APP[@]}" "$@"
00:31:54.469 07:41:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:31:54.469 07:41:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:31:54.469 07:41:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:54.469 07:41:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:54.469 [2024-11-26 07:41:38.566828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.469 07:41:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:54.469 [2024-11-26 07:41:38.567416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.469 [2024-11-26 07:41:38.567438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.469 [2024-11-26 07:41:38.567446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.469 [2024-11-26 07:41:38.567667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.469 [2024-11-26 07:41:38.567895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.469 [2024-11-26 07:41:38.567906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.469 [2024-11-26 07:41:38.567913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.469 [2024-11-26 07:41:38.567920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.469 07:41:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2310118
00:31:54.469 07:41:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2310118
00:31:54.469 07:41:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:31:54.469 07:41:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2310118 ']'
00:31:54.469 07:41:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:54.469 07:41:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:54.469 07:41:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:54.469 07:41:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:54.469 07:41:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:54.469 [2024-11-26 07:41:38.580667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.469 [2024-11-26 07:41:38.581207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.469 [2024-11-26 07:41:38.581230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.469 [2024-11-26 07:41:38.581239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.469 [2024-11-26 07:41:38.581460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.469 [2024-11-26 07:41:38.581682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.469 [2024-11-26 07:41:38.581693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.469 [2024-11-26 07:41:38.581702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.469 [2024-11-26 07:41:38.581711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.469 [2024-11-26 07:41:38.594466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.469 [2024-11-26 07:41:38.595130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.469 [2024-11-26 07:41:38.595169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.469 [2024-11-26 07:41:38.595181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.469 [2024-11-26 07:41:38.595420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.469 [2024-11-26 07:41:38.595643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.469 [2024-11-26 07:41:38.595654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.469 [2024-11-26 07:41:38.595662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.469 [2024-11-26 07:41:38.595670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.736 [2024-11-26 07:41:38.608412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.737 [2024-11-26 07:41:38.609029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.737 [2024-11-26 07:41:38.609049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.737 [2024-11-26 07:41:38.609058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.737 [2024-11-26 07:41:38.609278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.737 [2024-11-26 07:41:38.609499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.737 [2024-11-26 07:41:38.609508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.737 [2024-11-26 07:41:38.609517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.737 [2024-11-26 07:41:38.609524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.737 [2024-11-26 07:41:38.622311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.737 [2024-11-26 07:41:38.622857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.737 [2024-11-26 07:41:38.622882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.737 [2024-11-26 07:41:38.622891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.737 [2024-11-26 07:41:38.623114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.737 [2024-11-26 07:41:38.623334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.737 [2024-11-26 07:41:38.623343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.737 [2024-11-26 07:41:38.623350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.737 [2024-11-26 07:41:38.623357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.737 [2024-11-26 07:41:38.636095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.737 [2024-11-26 07:41:38.636651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.737 [2024-11-26 07:41:38.636690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.737 [2024-11-26 07:41:38.636701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.737 [2024-11-26 07:41:38.636947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.737 [2024-11-26 07:41:38.637172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.737 [2024-11-26 07:41:38.637182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.737 [2024-11-26 07:41:38.637189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.737 [2024-11-26 07:41:38.637197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.737 [2024-11-26 07:41:38.639959] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization...
00:31:54.737 [2024-11-26 07:41:38.640009] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:54.737 [2024-11-26 07:41:38.649936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.737 [2024-11-26 07:41:38.650624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.737 [2024-11-26 07:41:38.650663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.737 [2024-11-26 07:41:38.650676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.737 [2024-11-26 07:41:38.650923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.737 [2024-11-26 07:41:38.651148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.737 [2024-11-26 07:41:38.651158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.737 [2024-11-26 07:41:38.651166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.737 [2024-11-26 07:41:38.651175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.737 [2024-11-26 07:41:38.664013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.737 [2024-11-26 07:41:38.664656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.737 [2024-11-26 07:41:38.664694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.737 [2024-11-26 07:41:38.664705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.737 [2024-11-26 07:41:38.664957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.737 [2024-11-26 07:41:38.665181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.737 [2024-11-26 07:41:38.665192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.737 [2024-11-26 07:41:38.665201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.737 [2024-11-26 07:41:38.665209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.737 [2024-11-26 07:41:38.677944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.737 [2024-11-26 07:41:38.678482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.737 [2024-11-26 07:41:38.678502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.737 [2024-11-26 07:41:38.678510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.737 [2024-11-26 07:41:38.678730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.737 [2024-11-26 07:41:38.678957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.737 [2024-11-26 07:41:38.678969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.737 [2024-11-26 07:41:38.678977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.737 [2024-11-26 07:41:38.678984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.737 [2024-11-26 07:41:38.691925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.737 [2024-11-26 07:41:38.692465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.737 [2024-11-26 07:41:38.692503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.737 [2024-11-26 07:41:38.692514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.737 [2024-11-26 07:41:38.692753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.737 [2024-11-26 07:41:38.692984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.737 [2024-11-26 07:41:38.692995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.737 [2024-11-26 07:41:38.693004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.737 [2024-11-26 07:41:38.693013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.737 [2024-11-26 07:41:38.705738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.737 [2024-11-26 07:41:38.706407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.737 [2024-11-26 07:41:38.706445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.737 [2024-11-26 07:41:38.706457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.737 [2024-11-26 07:41:38.706695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.737 [2024-11-26 07:41:38.706928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.737 [2024-11-26 07:41:38.706944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.737 [2024-11-26 07:41:38.706952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.737 [2024-11-26 07:41:38.706960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.737 [2024-11-26 07:41:38.719570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.737 [2024-11-26 07:41:38.720214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.737 [2024-11-26 07:41:38.720253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.737 [2024-11-26 07:41:38.720264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.737 [2024-11-26 07:41:38.720502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.737 [2024-11-26 07:41:38.720726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.737 [2024-11-26 07:41:38.720737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.737 [2024-11-26 07:41:38.720745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.737 [2024-11-26 07:41:38.720754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.737 [2024-11-26 07:41:38.733504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.737 [2024-11-26 07:41:38.734046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.738 [2024-11-26 07:41:38.734085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.738 [2024-11-26 07:41:38.734097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.738 [2024-11-26 07:41:38.734338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.738 [2024-11-26 07:41:38.734561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.738 [2024-11-26 07:41:38.734571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.738 [2024-11-26 07:41:38.734580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.738 [2024-11-26 07:41:38.734588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.738 [2024-11-26 07:41:38.739226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:31:54.738 [2024-11-26 07:41:38.747333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.738 [2024-11-26 07:41:38.747893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.738 [2024-11-26 07:41:38.747914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.738 [2024-11-26 07:41:38.747923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.738 [2024-11-26 07:41:38.748143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.738 [2024-11-26 07:41:38.748364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.738 [2024-11-26 07:41:38.748374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.738 [2024-11-26 07:41:38.748381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.738 [2024-11-26 07:41:38.748394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.738 [2024-11-26 07:41:38.761139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.738 [2024-11-26 07:41:38.761814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.738 [2024-11-26 07:41:38.761853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.738 [2024-11-26 07:41:38.761873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.738 [2024-11-26 07:41:38.762113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.738 [2024-11-26 07:41:38.762337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.738 [2024-11-26 07:41:38.762347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.738 [2024-11-26 07:41:38.762355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.738 [2024-11-26 07:41:38.762363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.738 [2024-11-26 07:41:38.768590] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:54.738 [2024-11-26 07:41:38.768615] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:54.738 [2024-11-26 07:41:38.768622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:54.738 [2024-11-26 07:41:38.768627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:54.738 [2024-11-26 07:41:38.768631] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:54.738 [2024-11-26 07:41:38.769721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:54.738 [2024-11-26 07:41:38.769897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:31:54.738 [2024-11-26 07:41:38.769913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:54.738 [2024-11-26 07:41:38.775105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.738 [2024-11-26 07:41:38.775710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.738 [2024-11-26 07:41:38.775729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.738 [2024-11-26 07:41:38.775738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.738 [2024-11-26 07:41:38.775964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.738 [2024-11-26 07:41:38.776184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.738 [2024-11-26 07:41:38.776194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.738 [2024-11-26 07:41:38.776202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.738 [2024-11-26 07:41:38.776209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.738 [2024-11-26 07:41:38.788941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:54.738 [2024-11-26 07:41:38.789611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.738 [2024-11-26 07:41:38.789653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420
00:31:54.738 [2024-11-26 07:41:38.789664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set
00:31:54.738 [2024-11-26 07:41:38.789919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor
00:31:54.738 [2024-11-26 07:41:38.790144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:54.738 [2024-11-26 07:41:38.790154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:54.738 [2024-11-26 07:41:38.790162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:54.738 [2024-11-26 07:41:38.790170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:54.738 [2024-11-26 07:41:38.802909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.738 [2024-11-26 07:41:38.803566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.738 [2024-11-26 07:41:38.803606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.738 [2024-11-26 07:41:38.803617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.738 [2024-11-26 07:41:38.803856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.738 [2024-11-26 07:41:38.804088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.738 [2024-11-26 07:41:38.804098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.738 [2024-11-26 07:41:38.804107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.738 [2024-11-26 07:41:38.804115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.738 [2024-11-26 07:41:38.816853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.738 [2024-11-26 07:41:38.817556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.738 [2024-11-26 07:41:38.817595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.738 [2024-11-26 07:41:38.817607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.738 [2024-11-26 07:41:38.817845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.738 [2024-11-26 07:41:38.818077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.738 [2024-11-26 07:41:38.818088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.738 [2024-11-26 07:41:38.818096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.738 [2024-11-26 07:41:38.818105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.738 [2024-11-26 07:41:38.830640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.738 [2024-11-26 07:41:38.831203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.738 [2024-11-26 07:41:38.831223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.738 [2024-11-26 07:41:38.831232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.738 [2024-11-26 07:41:38.831452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.738 [2024-11-26 07:41:38.831671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.738 [2024-11-26 07:41:38.831687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.738 [2024-11-26 07:41:38.831695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.738 [2024-11-26 07:41:38.831703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.738 [2024-11-26 07:41:38.844431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.738 [2024-11-26 07:41:38.844853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.738 [2024-11-26 07:41:38.844877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.738 [2024-11-26 07:41:38.844886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.738 [2024-11-26 07:41:38.845105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.738 [2024-11-26 07:41:38.845325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.738 [2024-11-26 07:41:38.845335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.738 [2024-11-26 07:41:38.845342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.738 [2024-11-26 07:41:38.845349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:54.738 [2024-11-26 07:41:38.858498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:54.739 [2024-11-26 07:41:38.859174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.739 [2024-11-26 07:41:38.859213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:54.739 [2024-11-26 07:41:38.859225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:54.739 [2024-11-26 07:41:38.859464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:54.739 [2024-11-26 07:41:38.859688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:54.739 [2024-11-26 07:41:38.859698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:54.739 [2024-11-26 07:41:38.859706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:54.739 [2024-11-26 07:41:38.859714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.038 [2024-11-26 07:41:38.872464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.038 [2024-11-26 07:41:38.873098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.038 [2024-11-26 07:41:38.873136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.038 [2024-11-26 07:41:38.873147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.038 [2024-11-26 07:41:38.873386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.038 [2024-11-26 07:41:38.873609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.038 [2024-11-26 07:41:38.873619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.038 [2024-11-26 07:41:38.873627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.038 [2024-11-26 07:41:38.873640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.038 [2024-11-26 07:41:38.886378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.038 [2024-11-26 07:41:38.887101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.038 [2024-11-26 07:41:38.887139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.038 [2024-11-26 07:41:38.887151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.038 [2024-11-26 07:41:38.887389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.038 [2024-11-26 07:41:38.887612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.038 [2024-11-26 07:41:38.887622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.038 [2024-11-26 07:41:38.887630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.038 [2024-11-26 07:41:38.887638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.038 [2024-11-26 07:41:38.900167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.038 [2024-11-26 07:41:38.900718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.038 [2024-11-26 07:41:38.900758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.038 [2024-11-26 07:41:38.900770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.038 [2024-11-26 07:41:38.901019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.038 [2024-11-26 07:41:38.901243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.038 [2024-11-26 07:41:38.901254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.038 [2024-11-26 07:41:38.901262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.038 [2024-11-26 07:41:38.901270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.038 [2024-11-26 07:41:38.914001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.038 [2024-11-26 07:41:38.914556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.038 [2024-11-26 07:41:38.914577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.038 [2024-11-26 07:41:38.914585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.038 [2024-11-26 07:41:38.914804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.038 [2024-11-26 07:41:38.915031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.038 [2024-11-26 07:41:38.915041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.038 [2024-11-26 07:41:38.915048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.038 [2024-11-26 07:41:38.915056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.038 [2024-11-26 07:41:38.927998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.038 [2024-11-26 07:41:38.928657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.038 [2024-11-26 07:41:38.928700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.038 [2024-11-26 07:41:38.928711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.038 [2024-11-26 07:41:38.928958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.038 [2024-11-26 07:41:38.929182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.038 [2024-11-26 07:41:38.929192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.038 [2024-11-26 07:41:38.929200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.038 [2024-11-26 07:41:38.929208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.038 [2024-11-26 07:41:38.941939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.038 [2024-11-26 07:41:38.942494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.038 [2024-11-26 07:41:38.942532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.038 [2024-11-26 07:41:38.942545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.038 [2024-11-26 07:41:38.942784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.038 [2024-11-26 07:41:38.943015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.038 [2024-11-26 07:41:38.943026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.038 [2024-11-26 07:41:38.943034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.038 [2024-11-26 07:41:38.943043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.038 [2024-11-26 07:41:38.955773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.038 [2024-11-26 07:41:38.956442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.038 [2024-11-26 07:41:38.956482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.038 [2024-11-26 07:41:38.956493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.038 [2024-11-26 07:41:38.956731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.038 [2024-11-26 07:41:38.956964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.038 [2024-11-26 07:41:38.956975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.038 [2024-11-26 07:41:38.956983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.038 [2024-11-26 07:41:38.956991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.038 [2024-11-26 07:41:38.969726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.038 [2024-11-26 07:41:38.970330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.038 [2024-11-26 07:41:38.970369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.038 [2024-11-26 07:41:38.970381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.038 [2024-11-26 07:41:38.970628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.038 [2024-11-26 07:41:38.970853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.038 [2024-11-26 07:41:38.970871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.038 [2024-11-26 07:41:38.970879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.038 [2024-11-26 07:41:38.970887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.038 [2024-11-26 07:41:38.983615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.038 [2024-11-26 07:41:38.984072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.038 [2024-11-26 07:41:38.984092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.038 [2024-11-26 07:41:38.984101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.038 [2024-11-26 07:41:38.984320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.038 [2024-11-26 07:41:38.984541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.038 [2024-11-26 07:41:38.984550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.038 [2024-11-26 07:41:38.984557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.038 [2024-11-26 07:41:38.984564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.039 [2024-11-26 07:41:38.997497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.039 [2024-11-26 07:41:38.998029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.039 [2024-11-26 07:41:38.998066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.039 [2024-11-26 07:41:38.998078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.039 [2024-11-26 07:41:38.998316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.039 [2024-11-26 07:41:38.998539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.039 [2024-11-26 07:41:38.998549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.039 [2024-11-26 07:41:38.998558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.039 [2024-11-26 07:41:38.998566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.039 [2024-11-26 07:41:39.011300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.039 [2024-11-26 07:41:39.012006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.039 [2024-11-26 07:41:39.012046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.039 [2024-11-26 07:41:39.012058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.039 [2024-11-26 07:41:39.012299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.039 [2024-11-26 07:41:39.012523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.039 [2024-11-26 07:41:39.012538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.039 [2024-11-26 07:41:39.012546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.039 [2024-11-26 07:41:39.012554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.039 [2024-11-26 07:41:39.025096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.039 [2024-11-26 07:41:39.025703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.039 [2024-11-26 07:41:39.025723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.039 [2024-11-26 07:41:39.025731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.039 [2024-11-26 07:41:39.025956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.039 [2024-11-26 07:41:39.026176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.039 [2024-11-26 07:41:39.026186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.039 [2024-11-26 07:41:39.026193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.039 [2024-11-26 07:41:39.026201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.039 [2024-11-26 07:41:39.038925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.039 [2024-11-26 07:41:39.039364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.039 [2024-11-26 07:41:39.039381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.039 [2024-11-26 07:41:39.039389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.039 [2024-11-26 07:41:39.039607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.039 [2024-11-26 07:41:39.039826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.039 [2024-11-26 07:41:39.039837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.039 [2024-11-26 07:41:39.039844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.039 [2024-11-26 07:41:39.039850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.039 [2024-11-26 07:41:39.052787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.039 [2024-11-26 07:41:39.053473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.039 [2024-11-26 07:41:39.053512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.039 [2024-11-26 07:41:39.053523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.039 [2024-11-26 07:41:39.053761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.039 [2024-11-26 07:41:39.053994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.039 [2024-11-26 07:41:39.054005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.039 [2024-11-26 07:41:39.054013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.039 [2024-11-26 07:41:39.054025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.039 4417.17 IOPS, 17.25 MiB/s [2024-11-26T06:41:39.176Z] [2024-11-26 07:41:39.066761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.039 [2024-11-26 07:41:39.067441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.039 [2024-11-26 07:41:39.067479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.039 [2024-11-26 07:41:39.067492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.039 [2024-11-26 07:41:39.067731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.039 [2024-11-26 07:41:39.067963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.039 [2024-11-26 07:41:39.067974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.039 [2024-11-26 07:41:39.067982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.039 [2024-11-26 07:41:39.067990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.039 [2024-11-26 07:41:39.080724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.039 [2024-11-26 07:41:39.081374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.039 [2024-11-26 07:41:39.081413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.039 [2024-11-26 07:41:39.081424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.039 [2024-11-26 07:41:39.081662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.039 [2024-11-26 07:41:39.081893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.039 [2024-11-26 07:41:39.081905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.039 [2024-11-26 07:41:39.081913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.039 [2024-11-26 07:41:39.081922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.039 [2024-11-26 07:41:39.094655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.039 [2024-11-26 07:41:39.095270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.039 [2024-11-26 07:41:39.095291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.039 [2024-11-26 07:41:39.095299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.039 [2024-11-26 07:41:39.095518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.039 [2024-11-26 07:41:39.095738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.039 [2024-11-26 07:41:39.095747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.039 [2024-11-26 07:41:39.095755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.039 [2024-11-26 07:41:39.095762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.039 [2024-11-26 07:41:39.108487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.039 [2024-11-26 07:41:39.109157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.039 [2024-11-26 07:41:39.109196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.039 [2024-11-26 07:41:39.109207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.039 [2024-11-26 07:41:39.109446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.039 [2024-11-26 07:41:39.109670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.039 [2024-11-26 07:41:39.109679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.039 [2024-11-26 07:41:39.109687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.039 [2024-11-26 07:41:39.109695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.039 [2024-11-26 07:41:39.122441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.039 [2024-11-26 07:41:39.123172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.039 [2024-11-26 07:41:39.123211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.039 [2024-11-26 07:41:39.123223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.039 [2024-11-26 07:41:39.123461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.039 [2024-11-26 07:41:39.123684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.039 [2024-11-26 07:41:39.123694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.039 [2024-11-26 07:41:39.123702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.039 [2024-11-26 07:41:39.123710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.039 [2024-11-26 07:41:39.136238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.040 [2024-11-26 07:41:39.136830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.040 [2024-11-26 07:41:39.136849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.040 [2024-11-26 07:41:39.136857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.040 [2024-11-26 07:41:39.137083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.040 [2024-11-26 07:41:39.137303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.040 [2024-11-26 07:41:39.137313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.040 [2024-11-26 07:41:39.137320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.040 [2024-11-26 07:41:39.137327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.040 [2024-11-26 07:41:39.150056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.040 [2024-11-26 07:41:39.150592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.040 [2024-11-26 07:41:39.150610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.040 [2024-11-26 07:41:39.150618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.040 [2024-11-26 07:41:39.150841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.040 [2024-11-26 07:41:39.151067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.040 [2024-11-26 07:41:39.151077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.040 [2024-11-26 07:41:39.151084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.040 [2024-11-26 07:41:39.151091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.314 [2024-11-26 07:41:39.164112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.314 [2024-11-26 07:41:39.164805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.314 [2024-11-26 07:41:39.164844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.314 [2024-11-26 07:41:39.164856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.314 [2024-11-26 07:41:39.165104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.314 [2024-11-26 07:41:39.165328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.314 [2024-11-26 07:41:39.165339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.314 [2024-11-26 07:41:39.165347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.314 [2024-11-26 07:41:39.165356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.314 [2024-11-26 07:41:39.178086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.314 [2024-11-26 07:41:39.178680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.314 [2024-11-26 07:41:39.178700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.314 [2024-11-26 07:41:39.178708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.314 [2024-11-26 07:41:39.178933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.314 [2024-11-26 07:41:39.179153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.314 [2024-11-26 07:41:39.179163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.314 [2024-11-26 07:41:39.179171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.314 [2024-11-26 07:41:39.179178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.314 [2024-11-26 07:41:39.191902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.314 [2024-11-26 07:41:39.192575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.314 [2024-11-26 07:41:39.192613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.314 [2024-11-26 07:41:39.192625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.314 [2024-11-26 07:41:39.192871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.314 [2024-11-26 07:41:39.193095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.314 [2024-11-26 07:41:39.193111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.314 [2024-11-26 07:41:39.193120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.314 [2024-11-26 07:41:39.193128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.314 [2024-11-26 07:41:39.205873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.314 [2024-11-26 07:41:39.206419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.314 [2024-11-26 07:41:39.206439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.314 [2024-11-26 07:41:39.206447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.314 [2024-11-26 07:41:39.206666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.314 [2024-11-26 07:41:39.206891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.314 [2024-11-26 07:41:39.206901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.314 [2024-11-26 07:41:39.206909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.314 [2024-11-26 07:41:39.206916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.314 [2024-11-26 07:41:39.219843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.314 [2024-11-26 07:41:39.220369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.314 [2024-11-26 07:41:39.220408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.314 [2024-11-26 07:41:39.220419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.314 [2024-11-26 07:41:39.220658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.314 [2024-11-26 07:41:39.220897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.314 [2024-11-26 07:41:39.220908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.314 [2024-11-26 07:41:39.220916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.314 [2024-11-26 07:41:39.220924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.314 [2024-11-26 07:41:39.233652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.314 [2024-11-26 07:41:39.234288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.314 [2024-11-26 07:41:39.234327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.314 [2024-11-26 07:41:39.234338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.314 [2024-11-26 07:41:39.234576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.314 [2024-11-26 07:41:39.234800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.314 [2024-11-26 07:41:39.234810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.314 [2024-11-26 07:41:39.234818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.314 [2024-11-26 07:41:39.234830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.314 [2024-11-26 07:41:39.247571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.314 [2024-11-26 07:41:39.248117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.314 [2024-11-26 07:41:39.248156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.314 [2024-11-26 07:41:39.248168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.314 [2024-11-26 07:41:39.248406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.314 [2024-11-26 07:41:39.248629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.314 [2024-11-26 07:41:39.248639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.314 [2024-11-26 07:41:39.248647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.314 [2024-11-26 07:41:39.248655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.314 [2024-11-26 07:41:39.261396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.314 [2024-11-26 07:41:39.261994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.314 [2024-11-26 07:41:39.262032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.314 [2024-11-26 07:41:39.262045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.314 [2024-11-26 07:41:39.262287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.314 [2024-11-26 07:41:39.262521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.314 [2024-11-26 07:41:39.262532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.314 [2024-11-26 07:41:39.262541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.314 [2024-11-26 07:41:39.262549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.314 [2024-11-26 07:41:39.275297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.314 [2024-11-26 07:41:39.275870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.314 [2024-11-26 07:41:39.275907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.314 [2024-11-26 07:41:39.275918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.314 [2024-11-26 07:41:39.276157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.314 [2024-11-26 07:41:39.276380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.314 [2024-11-26 07:41:39.276390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.314 [2024-11-26 07:41:39.276398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.314 [2024-11-26 07:41:39.276406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.314 [2024-11-26 07:41:39.289140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.314 [2024-11-26 07:41:39.289788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.314 [2024-11-26 07:41:39.289826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.314 [2024-11-26 07:41:39.289839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.314 [2024-11-26 07:41:39.290087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.314 [2024-11-26 07:41:39.290311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.314 [2024-11-26 07:41:39.290321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.314 [2024-11-26 07:41:39.290329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.314 [2024-11-26 07:41:39.290337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.314 [2024-11-26 07:41:39.303062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.314 [2024-11-26 07:41:39.303752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.314 [2024-11-26 07:41:39.303791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.314 [2024-11-26 07:41:39.303802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.314 [2024-11-26 07:41:39.304049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.314 [2024-11-26 07:41:39.304274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.314 [2024-11-26 07:41:39.304284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.314 [2024-11-26 07:41:39.304292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.314 [2024-11-26 07:41:39.304301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.314 [2024-11-26 07:41:39.317027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.314 [2024-11-26 07:41:39.317570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.314 [2024-11-26 07:41:39.317591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.314 [2024-11-26 07:41:39.317599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.314 [2024-11-26 07:41:39.317819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.314 [2024-11-26 07:41:39.318043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.314 [2024-11-26 07:41:39.318055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.314 [2024-11-26 07:41:39.318062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.314 [2024-11-26 07:41:39.318069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.314 [2024-11-26 07:41:39.331008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.314 [2024-11-26 07:41:39.331546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.314 [2024-11-26 07:41:39.331563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.314 [2024-11-26 07:41:39.331571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.314 [2024-11-26 07:41:39.331794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.314 [2024-11-26 07:41:39.332019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.314 [2024-11-26 07:41:39.332029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.314 [2024-11-26 07:41:39.332036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.314 [2024-11-26 07:41:39.332043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.314 [2024-11-26 07:41:39.344970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.314 [2024-11-26 07:41:39.345509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.314 [2024-11-26 07:41:39.345548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.314 [2024-11-26 07:41:39.345561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.314 [2024-11-26 07:41:39.345801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.314 [2024-11-26 07:41:39.346035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.314 [2024-11-26 07:41:39.346046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.314 [2024-11-26 07:41:39.346056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.314 [2024-11-26 07:41:39.346065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.314 [2024-11-26 07:41:39.358795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.314 [2024-11-26 07:41:39.359229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.314 [2024-11-26 07:41:39.359249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.314 [2024-11-26 07:41:39.359257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.314 [2024-11-26 07:41:39.359476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.314 [2024-11-26 07:41:39.359696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.314 [2024-11-26 07:41:39.359706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.314 [2024-11-26 07:41:39.359714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.315 [2024-11-26 07:41:39.359721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.315 [2024-11-26 07:41:39.372659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.315 [2024-11-26 07:41:39.373229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.315 [2024-11-26 07:41:39.373248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.315 [2024-11-26 07:41:39.373256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.315 [2024-11-26 07:41:39.373475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.315 [2024-11-26 07:41:39.373695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.315 [2024-11-26 07:41:39.373709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.315 [2024-11-26 07:41:39.373716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.315 [2024-11-26 07:41:39.373723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.315 [2024-11-26 07:41:39.386442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.315 [2024-11-26 07:41:39.387106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.315 [2024-11-26 07:41:39.387145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.315 [2024-11-26 07:41:39.387157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.315 [2024-11-26 07:41:39.387395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.315 [2024-11-26 07:41:39.387617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.315 [2024-11-26 07:41:39.387628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.315 [2024-11-26 07:41:39.387637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.315 [2024-11-26 07:41:39.387645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.315 [2024-11-26 07:41:39.400378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.315 [2024-11-26 07:41:39.400963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.315 [2024-11-26 07:41:39.401002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.315 [2024-11-26 07:41:39.401014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.315 [2024-11-26 07:41:39.401254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.315 [2024-11-26 07:41:39.401477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.315 [2024-11-26 07:41:39.401488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.315 [2024-11-26 07:41:39.401496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.315 [2024-11-26 07:41:39.401504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.315 [2024-11-26 07:41:39.414233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.315 [2024-11-26 07:41:39.414883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.315 [2024-11-26 07:41:39.414922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.315 [2024-11-26 07:41:39.414935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.315 [2024-11-26 07:41:39.415176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.315 [2024-11-26 07:41:39.415399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.315 [2024-11-26 07:41:39.415410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.315 [2024-11-26 07:41:39.415418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.315 [2024-11-26 07:41:39.415432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.315 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:55.315 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:31:55.315 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:55.315 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:55.315 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:55.315 [2024-11-26 07:41:39.428174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.315 [2024-11-26 07:41:39.428819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.315 [2024-11-26 07:41:39.428858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.315 [2024-11-26 07:41:39.428879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.315 [2024-11-26 07:41:39.429119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.315 [2024-11-26 07:41:39.429342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.315 [2024-11-26 07:41:39.429352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.315 [2024-11-26 07:41:39.429360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.315 [2024-11-26 07:41:39.429368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.585 [2024-11-26 07:41:39.442100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.586 [2024-11-26 07:41:39.442693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.586 [2024-11-26 07:41:39.442713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.586 [2024-11-26 07:41:39.442721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.586 [2024-11-26 07:41:39.442947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.586 [2024-11-26 07:41:39.443168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.586 [2024-11-26 07:41:39.443177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.586 [2024-11-26 07:41:39.443185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.586 [2024-11-26 07:41:39.443193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.586 [2024-11-26 07:41:39.455918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.586 [2024-11-26 07:41:39.456407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.586 [2024-11-26 07:41:39.456446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.586 [2024-11-26 07:41:39.456458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.586 [2024-11-26 07:41:39.456698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.586 [2024-11-26 07:41:39.456930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.586 [2024-11-26 07:41:39.456942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.586 [2024-11-26 07:41:39.456955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.586 [2024-11-26 07:41:39.456964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:55.586 [2024-11-26 07:41:39.469702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.586 [2024-11-26 07:41:39.470370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.586 [2024-11-26 07:41:39.470408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.586 [2024-11-26 07:41:39.470420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.586 [2024-11-26 07:41:39.470657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.586 [2024-11-26 07:41:39.470889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.586 [2024-11-26 07:41:39.470900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.586 [2024-11-26 07:41:39.470907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.586 [2024-11-26 07:41:39.470916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.586 [2024-11-26 07:41:39.473854] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:55.586 [2024-11-26 07:41:39.483643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.586 [2024-11-26 07:41:39.484072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.586 [2024-11-26 07:41:39.484093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.586 [2024-11-26 07:41:39.484102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.586 [2024-11-26 07:41:39.484321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.586 [2024-11-26 07:41:39.484540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.586 [2024-11-26 07:41:39.484550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.586 [2024-11-26 07:41:39.484557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.586 [2024-11-26 07:41:39.484564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.586 [2024-11-26 07:41:39.497492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.586 [2024-11-26 07:41:39.498037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.586 [2024-11-26 07:41:39.498055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.586 [2024-11-26 07:41:39.498068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.586 [2024-11-26 07:41:39.498287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.586 [2024-11-26 07:41:39.498506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.586 [2024-11-26 07:41:39.498515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.586 [2024-11-26 07:41:39.498522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.586 [2024-11-26 07:41:39.498529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.586 [2024-11-26 07:41:39.511454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.586 [2024-11-26 07:41:39.512143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.586 [2024-11-26 07:41:39.512182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.586 [2024-11-26 07:41:39.512193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.586 [2024-11-26 07:41:39.512432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.586 [2024-11-26 07:41:39.512655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.586 [2024-11-26 07:41:39.512665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.586 [2024-11-26 07:41:39.512673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.586 [2024-11-26 07:41:39.512681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.586 Malloc0 00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:55.586 [2024-11-26 07:41:39.525421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.586 [2024-11-26 07:41:39.525876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.586 [2024-11-26 07:41:39.525898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.586 [2024-11-26 07:41:39.525906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.586 [2024-11-26 07:41:39.526126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.586 [2024-11-26 07:41:39.526347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.586 [2024-11-26 07:41:39.526357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.586 [2024-11-26 07:41:39.526364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.586 [2024-11-26 07:41:39.526371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.586 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:55.586 [2024-11-26 07:41:39.539300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.586 [2024-11-26 07:41:39.539860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.586 [2024-11-26 07:41:39.539907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13746a0 with addr=10.0.0.2, port=4420 00:31:55.586 [2024-11-26 07:41:39.539919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(6) to be set 00:31:55.586 [2024-11-26 07:41:39.540157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13746a0 (9): Bad file descriptor 00:31:55.586 [2024-11-26 07:41:39.540381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:55.587 [2024-11-26 07:41:39.540391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:55.587 [2024-11-26 07:41:39.540399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:55.587 [2024-11-26 07:41:39.540407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:55.587 [2024-11-26 07:41:39.545287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:55.587 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.587 07:41:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2309088 00:31:55.587 [2024-11-26 07:41:39.553137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:55.587 [2024-11-26 07:41:39.584625] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:31:57.101 4569.71 IOPS, 17.85 MiB/s [2024-11-26T06:41:42.180Z] 5460.88 IOPS, 21.33 MiB/s [2024-11-26T06:41:43.120Z] 6101.44 IOPS, 23.83 MiB/s [2024-11-26T06:41:44.506Z] 6608.60 IOPS, 25.81 MiB/s [2024-11-26T06:41:45.446Z] 7041.73 IOPS, 27.51 MiB/s [2024-11-26T06:41:46.387Z] 7373.58 IOPS, 28.80 MiB/s [2024-11-26T06:41:47.327Z] 7672.00 IOPS, 29.97 MiB/s [2024-11-26T06:41:48.268Z] 7928.00 IOPS, 30.97 MiB/s 00:32:04.131 Latency(us) 00:32:04.131 [2024-11-26T06:41:48.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:04.131 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:04.131 Verification LBA range: start 0x0 length 0x4000 00:32:04.131 Nvme1n1 : 15.01 8153.58 31.85 9833.09 0.00 7090.92 795.31 15510.19 00:32:04.131 [2024-11-26T06:41:48.268Z] =================================================================================================================== 00:32:04.131 [2024-11-26T06:41:48.268Z] Total : 8153.58 31.85 9833.09 0.00 7090.92 795.31 15510.19 00:32:04.131 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:32:04.131 
07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:04.131 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.131 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:04.131 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.131 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:32:04.131 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:32:04.131 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:04.131 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:32:04.131 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:04.131 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:32:04.131 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:04.131 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:04.131 rmmod nvme_tcp 00:32:04.131 rmmod nvme_fabrics 00:32:04.131 rmmod nvme_keyring 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2310118 ']' 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2310118 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2310118 ']' 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@958 -- # kill -0 2310118 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2310118 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2310118' 00:32:04.392 killing process with pid 2310118 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2310118 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2310118 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:04.392 07:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.935 07:41:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:06.935 00:32:06.935 real 0m29.140s 00:32:06.935 user 1m3.367s 00:32:06.935 sys 0m8.232s 00:32:06.935 07:41:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:06.935 07:41:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:06.935 ************************************ 00:32:06.935 END TEST nvmf_bdevperf 00:32:06.935 ************************************ 00:32:06.935 07:41:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:06.935 07:41:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:06.935 07:41:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:06.935 07:41:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.935 ************************************ 00:32:06.935 START TEST nvmf_target_disconnect 00:32:06.935 ************************************ 00:32:06.935 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:06.935 * Looking for test storage... 
00:32:06.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:06.935 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:06.935 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:32:06.936 07:41:50 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:06.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.936 
--rc genhtml_branch_coverage=1 00:32:06.936 --rc genhtml_function_coverage=1 00:32:06.936 --rc genhtml_legend=1 00:32:06.936 --rc geninfo_all_blocks=1 00:32:06.936 --rc geninfo_unexecuted_blocks=1 00:32:06.936 00:32:06.936 ' 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:06.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.936 --rc genhtml_branch_coverage=1 00:32:06.936 --rc genhtml_function_coverage=1 00:32:06.936 --rc genhtml_legend=1 00:32:06.936 --rc geninfo_all_blocks=1 00:32:06.936 --rc geninfo_unexecuted_blocks=1 00:32:06.936 00:32:06.936 ' 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:06.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.936 --rc genhtml_branch_coverage=1 00:32:06.936 --rc genhtml_function_coverage=1 00:32:06.936 --rc genhtml_legend=1 00:32:06.936 --rc geninfo_all_blocks=1 00:32:06.936 --rc geninfo_unexecuted_blocks=1 00:32:06.936 00:32:06.936 ' 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:06.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.936 --rc genhtml_branch_coverage=1 00:32:06.936 --rc genhtml_function_coverage=1 00:32:06.936 --rc genhtml_legend=1 00:32:06.936 --rc geninfo_all_blocks=1 00:32:06.936 --rc geninfo_unexecuted_blocks=1 00:32:06.936 00:32:06.936 ' 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:06.936 07:41:50 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:06.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:06.936 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:06.937 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:06.937 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:32:06.937 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:06.937 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:06.937 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:06.937 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:06.937 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:06.937 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.937 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:32:06.937 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.937 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:06.937 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:06.937 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:32:06.937 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:32:15.080 
07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:15.080 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:15.080 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:15.080 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:15.081 Found net devices under 0000:31:00.0: cvl_0_0 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:15.081 Found net devices under 0000:31:00.1: cvl_0_1 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:15.081 07:41:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:15.081 07:41:59 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:15.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:15.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:32:15.081 00:32:15.081 --- 10.0.0.2 ping statistics --- 00:32:15.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.081 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:15.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:15.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:32:15.081 00:32:15.081 --- 10.0.0.1 ping statistics --- 00:32:15.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.081 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:15.081 07:41:59 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:15.081 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:15.350 ************************************ 00:32:15.350 START TEST nvmf_target_disconnect_tc1 00:32:15.350 ************************************ 00:32:15.350 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:32:15.350 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:15.350 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:32:15.350 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:15.350 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:15.350 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:15.350 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:15.350 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:15.350 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:15.350 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:15.350 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:15.350 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:32:15.350 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:15.350 [2024-11-26 07:41:59.351922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:15.350 [2024-11-26 07:41:59.351990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x556cf0 with 
addr=10.0.0.2, port=4420 00:32:15.350 [2024-11-26 07:41:59.352019] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:15.350 [2024-11-26 07:41:59.352033] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:15.350 [2024-11-26 07:41:59.352041] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:32:15.350 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:32:15.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:15.351 Initializing NVMe Controllers 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:15.351 00:32:15.351 real 0m0.130s 00:32:15.351 user 0m0.054s 00:32:15.351 sys 0m0.076s 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:15.351 ************************************ 00:32:15.351 END TEST nvmf_target_disconnect_tc1 00:32:15.351 ************************************ 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:15.351 07:41:59 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:15.351 ************************************ 00:32:15.351 START TEST nvmf_target_disconnect_tc2 00:32:15.351 ************************************ 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2316839 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2316839 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2316839 ']' 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:15.351 07:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:15.613 [2024-11-26 07:41:59.507012] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:32:15.613 [2024-11-26 07:41:59.507068] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:15.613 [2024-11-26 07:41:59.617718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:15.613 [2024-11-26 07:41:59.669311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:15.613 [2024-11-26 07:41:59.669370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:15.613 [2024-11-26 07:41:59.669379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:15.613 [2024-11-26 07:41:59.669386] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:15.613 [2024-11-26 07:41:59.669392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:15.613 [2024-11-26 07:41:59.671470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:15.613 [2024-11-26 07:41:59.671630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:15.613 [2024-11-26 07:41:59.671791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:15.613 [2024-11-26 07:41:59.671792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:16.556 Malloc0 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.556 07:42:00 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:16.556 [2024-11-26 07:42:00.428153] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.556 07:42:00 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:16.556 [2024-11-26 07:42:00.468609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2317065 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:32:16.556 07:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:18.474 07:42:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2316839
00:32:18.474 07:42:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Write completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Write completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Write completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Write completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Write completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Write completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Write completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Write completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Write completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Write completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Write completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Read completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 Write completed with error (sct=0, sc=8)
00:32:18.474 starting I/O failed
00:32:18.474 [2024-11-26 07:42:02.502603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.474 [2024-11-26 07:42:02.502904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.474 [2024-11-26 07:42:02.502928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.474 qpair failed and we were unable to recover it.
00:32:18.474 [2024-11-26 07:42:02.503341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.474 [2024-11-26 07:42:02.503379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.474 qpair failed and we were unable to recover it.
00:32:18.474 [2024-11-26 07:42:02.503674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.474 [2024-11-26 07:42:02.503688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.474 qpair failed and we were unable to recover it. 00:32:18.474 [2024-11-26 07:42:02.504128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.474 [2024-11-26 07:42:02.504169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.474 qpair failed and we were unable to recover it. 00:32:18.474 [2024-11-26 07:42:02.504523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.474 [2024-11-26 07:42:02.504538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.474 qpair failed and we were unable to recover it. 00:32:18.474 [2024-11-26 07:42:02.504740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.474 [2024-11-26 07:42:02.504753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.474 qpair failed and we were unable to recover it. 00:32:18.474 [2024-11-26 07:42:02.505222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.474 [2024-11-26 07:42:02.505262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.474 qpair failed and we were unable to recover it. 
00:32:18.474 [2024-11-26 07:42:02.505564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.474 [2024-11-26 07:42:02.505579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.474 qpair failed and we were unable to recover it. 00:32:18.474 [2024-11-26 07:42:02.505780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.474 [2024-11-26 07:42:02.505793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.474 qpair failed and we were unable to recover it. 00:32:18.474 [2024-11-26 07:42:02.505990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.474 [2024-11-26 07:42:02.506004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.474 qpair failed and we were unable to recover it. 00:32:18.474 [2024-11-26 07:42:02.506291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.474 [2024-11-26 07:42:02.506309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.474 qpair failed and we were unable to recover it. 00:32:18.474 [2024-11-26 07:42:02.506685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.474 [2024-11-26 07:42:02.506698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.474 qpair failed and we were unable to recover it. 
00:32:18.474 [2024-11-26 07:42:02.507051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.474 [2024-11-26 07:42:02.507065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.474 qpair failed and we were unable to recover it. 00:32:18.474 [2024-11-26 07:42:02.508074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.474 [2024-11-26 07:42:02.508102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.474 qpair failed and we were unable to recover it. 00:32:18.474 [2024-11-26 07:42:02.508410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.474 [2024-11-26 07:42:02.508424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.474 qpair failed and we were unable to recover it. 00:32:18.474 [2024-11-26 07:42:02.508738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.474 [2024-11-26 07:42:02.508750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.474 qpair failed and we were unable to recover it. 00:32:18.474 [2024-11-26 07:42:02.509040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.474 [2024-11-26 07:42:02.509054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.474 qpair failed and we were unable to recover it. 
00:32:18.474 [2024-11-26 07:42:02.509344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.474 [2024-11-26 07:42:02.509357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.474 qpair failed and we were unable to recover it. 00:32:18.474 [2024-11-26 07:42:02.509598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.474 [2024-11-26 07:42:02.509610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.474 qpair failed and we were unable to recover it. 00:32:18.474 [2024-11-26 07:42:02.509858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.474 [2024-11-26 07:42:02.509877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.474 qpair failed and we were unable to recover it. 00:32:18.474 [2024-11-26 07:42:02.510275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.474 [2024-11-26 07:42:02.510287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.474 qpair failed and we were unable to recover it. 00:32:18.474 [2024-11-26 07:42:02.510617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.510630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 
00:32:18.475 [2024-11-26 07:42:02.510832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.510845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.511071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.511084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.511383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.511396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.511740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.511753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.512135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.512148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 
00:32:18.475 [2024-11-26 07:42:02.512332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.512346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.512724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.512736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.513056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.513069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.513362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.513375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.513684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.513697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 
00:32:18.475 [2024-11-26 07:42:02.513922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.513936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.514345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.514358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.514508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.514521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.514729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.514744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.515070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.515083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 
00:32:18.475 [2024-11-26 07:42:02.515425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.515440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.515780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.515792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.516198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.516211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.516551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.516563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.516908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.516921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 
00:32:18.475 [2024-11-26 07:42:02.517229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.517241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.517543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.517555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.517900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.517912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.518145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.518158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.518377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.518389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 
00:32:18.475 [2024-11-26 07:42:02.518711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.518723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.519048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.519060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.519399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.519412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.519638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.519650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.519959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.519971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 
00:32:18.475 [2024-11-26 07:42:02.520323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.520336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.520550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.520563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.520893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.520906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.521286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.521299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.521637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.521650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 
00:32:18.475 [2024-11-26 07:42:02.521841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.521855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.522098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.522110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.475 [2024-11-26 07:42:02.522396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.475 [2024-11-26 07:42:02.522408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.475 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.522607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.522618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.522965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.522979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 
00:32:18.476 [2024-11-26 07:42:02.523199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.523211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.523532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.523544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.523833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.523844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.524166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.524179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.524502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.524513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 
00:32:18.476 [2024-11-26 07:42:02.524838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.524851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.525183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.525195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.525426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.525437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.525719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.525730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.526066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.526078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 
00:32:18.476 [2024-11-26 07:42:02.526262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.526275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.526580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.526592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.526812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.526823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.527130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.527142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.527485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.527496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 
00:32:18.476 [2024-11-26 07:42:02.527699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.527709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.527942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.527955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.528231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.528242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.528538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.528550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.528875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.528886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 
00:32:18.476 [2024-11-26 07:42:02.529205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.529217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.529520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.529531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.529704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.529715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.530011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.530023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 00:32:18.476 [2024-11-26 07:42:02.530253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.476 [2024-11-26 07:42:02.530263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:18.476 qpair failed and we were unable to recover it. 
00:32:18.476 [2024-11-26 07:42:02.530578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.476 [2024-11-26 07:42:02.530589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.476 qpair failed and we were unable to recover it.
00:32:18.476 [2024-11-26 07:42:02.530899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.476 [2024-11-26 07:42:02.530911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.476 qpair failed and we were unable to recover it.
00:32:18.476 [2024-11-26 07:42:02.531095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.476 [2024-11-26 07:42:02.531107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.476 qpair failed and we were unable to recover it.
00:32:18.476 [2024-11-26 07:42:02.531432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.476 [2024-11-26 07:42:02.531444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.476 qpair failed and we were unable to recover it.
00:32:18.476 [2024-11-26 07:42:02.531615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.476 [2024-11-26 07:42:02.531626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.476 qpair failed and we were unable to recover it.
00:32:18.476 [2024-11-26 07:42:02.531921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.476 [2024-11-26 07:42:02.531932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.476 qpair failed and we were unable to recover it.
00:32:18.476 [2024-11-26 07:42:02.532283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.476 [2024-11-26 07:42:02.532294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.476 qpair failed and we were unable to recover it.
00:32:18.476 [2024-11-26 07:42:02.532482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.476 [2024-11-26 07:42:02.532492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.476 qpair failed and we were unable to recover it.
00:32:18.476 [2024-11-26 07:42:02.532822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.476 [2024-11-26 07:42:02.532833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.476 qpair failed and we were unable to recover it.
00:32:18.476 [2024-11-26 07:42:02.533043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.476 [2024-11-26 07:42:02.533055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.476 qpair failed and we were unable to recover it.
00:32:18.476 [2024-11-26 07:42:02.533363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.476 [2024-11-26 07:42:02.533374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.476 qpair failed and we were unable to recover it.
00:32:18.476 [2024-11-26 07:42:02.533579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.533590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.533943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.533955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.534109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.534121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.534414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.534426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.534767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.534778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.535112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.535125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.535431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.535443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.535653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.535666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.535898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.535911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.536177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.536189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.536574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.536586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.536842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.536854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.537047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.537059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.537279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.537291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.537586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.537599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.537910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.537922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.538252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.538263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.538562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.538573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.538906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.538919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.539136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.539147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.539369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.539379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.539681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.539692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.539972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.539983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.540298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.540310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.540538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.540549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.540715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.540728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.541013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.541024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.541374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.541386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 [2024-11-26 07:42:02.541475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.477 [2024-11-26 07:42:02.541484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:18.477 qpair failed and we were unable to recover it.
00:32:18.477 Read completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Read completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Read completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Read completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Read completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Read completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Read completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Read completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Read completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Read completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Read completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Write completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Write completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Write completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Write completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Read completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Write completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Write completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Write completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Write completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Write completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Read completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Read completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Write completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Write completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Read completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Read completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.477 Write completed with error (sct=0, sc=8)
00:32:18.477 starting I/O failed
00:32:18.478 Read completed with error (sct=0, sc=8)
00:32:18.478 starting I/O failed
00:32:18.478 Read completed with error (sct=0, sc=8)
00:32:18.478 starting I/O failed
00:32:18.478 Write completed with error (sct=0, sc=8)
00:32:18.478 starting I/O failed
00:32:18.478 Read completed with error (sct=0, sc=8)
00:32:18.478 starting I/O failed
00:32:18.478 [2024-11-26 07:42:02.541685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:18.478 [2024-11-26 07:42:02.542157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.542194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.542516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.542527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.542827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.542836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.543289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.543319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.543531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.543541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.543740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.543749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.544093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.544102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.544405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.544415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.544737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.544744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.544789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.544796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.545125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.545133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.545367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.545376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.545717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.545726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.546102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.546111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.546321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.546328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.546656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.546665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.546893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.546901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.547209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.547217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.547512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.547520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.547922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.547930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.548214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.548223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.548430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.548438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.548749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.548757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.549096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.549104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.549422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.549431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.549727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.549736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.549908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.549917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.550095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.550103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.478 qpair failed and we were unable to recover it.
00:32:18.478 [2024-11-26 07:42:02.550394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.478 [2024-11-26 07:42:02.550403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.479 [2024-11-26 07:42:02.550734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.479 [2024-11-26 07:42:02.550742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.479 qpair failed and we were unable to recover it.
00:32:18.479 [2024-11-26 07:42:02.551049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.479 [2024-11-26 07:42:02.551057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.479 qpair failed and we were unable to recover it.
00:32:18.479 [2024-11-26 07:42:02.551367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.479 [2024-11-26 07:42:02.551376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.479 qpair failed and we were unable to recover it.
00:32:18.479 [2024-11-26 07:42:02.551702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.479 [2024-11-26 07:42:02.551711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.479 qpair failed and we were unable to recover it.
00:32:18.479 [2024-11-26 07:42:02.551870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.479 [2024-11-26 07:42:02.551879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.479 qpair failed and we were unable to recover it.
00:32:18.479 [2024-11-26 07:42:02.552174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.479 [2024-11-26 07:42:02.552182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.479 qpair failed and we were unable to recover it.
00:32:18.479 [2024-11-26 07:42:02.552468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.479 [2024-11-26 07:42:02.552476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.479 qpair failed and we were unable to recover it.
00:32:18.479 [2024-11-26 07:42:02.552850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.479 [2024-11-26 07:42:02.552857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.479 qpair failed and we were unable to recover it.
00:32:18.479 [2024-11-26 07:42:02.553182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.479 [2024-11-26 07:42:02.553192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.479 qpair failed and we were unable to recover it.
00:32:18.479 [2024-11-26 07:42:02.553530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.479 [2024-11-26 07:42:02.553540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.479 qpair failed and we were unable to recover it.
00:32:18.479 [2024-11-26 07:42:02.553891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.479 [2024-11-26 07:42:02.553901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.479 qpair failed and we were unable to recover it.
00:32:18.479 [2024-11-26 07:42:02.554012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.479 [2024-11-26 07:42:02.554020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.479 qpair failed and we were unable to recover it.
00:32:18.479 [2024-11-26 07:42:02.554263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.479 [2024-11-26 07:42:02.554271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.479 qpair failed and we were unable to recover it.
00:32:18.479 [2024-11-26 07:42:02.554604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.479 [2024-11-26 07:42:02.554613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.479 qpair failed and we were unable to recover it.
00:32:18.479 [2024-11-26 07:42:02.554897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.479 [2024-11-26 07:42:02.554905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.479 qpair failed and we were unable to recover it.
00:32:18.479 [2024-11-26 07:42:02.555208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.479 [2024-11-26 07:42:02.555216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.479 qpair failed and we were unable to recover it.
00:32:18.479 [2024-11-26 07:42:02.555409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.479 [2024-11-26 07:42:02.555417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.479 qpair failed and we were unable to recover it.
00:32:18.479 [2024-11-26 07:42:02.555563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.479 [2024-11-26 07:42:02.555572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.479 qpair failed and we were unable to recover it.
00:32:18.479 [2024-11-26 07:42:02.555858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.555871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.479 qpair failed and we were unable to recover it. 00:32:18.479 [2024-11-26 07:42:02.556258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.556267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.479 qpair failed and we were unable to recover it. 00:32:18.479 [2024-11-26 07:42:02.556568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.556576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.479 qpair failed and we were unable to recover it. 00:32:18.479 [2024-11-26 07:42:02.556901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.556910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.479 qpair failed and we were unable to recover it. 00:32:18.479 [2024-11-26 07:42:02.557221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.557228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.479 qpair failed and we were unable to recover it. 
00:32:18.479 [2024-11-26 07:42:02.557543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.557551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.479 qpair failed and we were unable to recover it. 00:32:18.479 [2024-11-26 07:42:02.557847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.557856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.479 qpair failed and we were unable to recover it. 00:32:18.479 [2024-11-26 07:42:02.558168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.558176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.479 qpair failed and we were unable to recover it. 00:32:18.479 [2024-11-26 07:42:02.558487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.558496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.479 qpair failed and we were unable to recover it. 00:32:18.479 [2024-11-26 07:42:02.558744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.558753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.479 qpair failed and we were unable to recover it. 
00:32:18.479 [2024-11-26 07:42:02.558979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.558988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.479 qpair failed and we were unable to recover it. 00:32:18.479 [2024-11-26 07:42:02.559317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.559326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.479 qpair failed and we were unable to recover it. 00:32:18.479 [2024-11-26 07:42:02.559610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.559620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.479 qpair failed and we were unable to recover it. 00:32:18.479 [2024-11-26 07:42:02.559915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.559924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.479 qpair failed and we were unable to recover it. 00:32:18.479 [2024-11-26 07:42:02.560243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.560251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.479 qpair failed and we were unable to recover it. 
00:32:18.479 [2024-11-26 07:42:02.560290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.560297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.479 qpair failed and we were unable to recover it. 00:32:18.479 [2024-11-26 07:42:02.560593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.560602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.479 qpair failed and we were unable to recover it. 00:32:18.479 [2024-11-26 07:42:02.560897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.560905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.479 qpair failed and we were unable to recover it. 00:32:18.479 [2024-11-26 07:42:02.561214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.561223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.479 qpair failed and we were unable to recover it. 00:32:18.479 [2024-11-26 07:42:02.561581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.479 [2024-11-26 07:42:02.561590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 
00:32:18.480 [2024-11-26 07:42:02.561930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.561940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.562251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.562260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.562604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.562612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.562875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.562884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.563190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.563199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 
00:32:18.480 [2024-11-26 07:42:02.563557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.563566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.563722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.563733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.564051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.564060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.564393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.564401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.564714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.564722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 
00:32:18.480 [2024-11-26 07:42:02.564923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.564933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.565205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.565215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.565548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.565557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.565805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.565813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.566142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.566150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 
00:32:18.480 [2024-11-26 07:42:02.566427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.566436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.566739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.566748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.567064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.567074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.567469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.567478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.567768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.567776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 
00:32:18.480 [2024-11-26 07:42:02.568072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.568081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.568406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.568415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.568726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.568736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.569055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.569063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.569245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.569253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 
00:32:18.480 [2024-11-26 07:42:02.569559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.569568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.569874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.569884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.570056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.570065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.570190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.570197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.570514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.570523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 
00:32:18.480 [2024-11-26 07:42:02.570849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.570858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.571177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.571186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.571500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.571508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.571868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.571878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.572097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.572104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 
00:32:18.480 [2024-11-26 07:42:02.572199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.572207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.572510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.480 [2024-11-26 07:42:02.572519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.480 qpair failed and we were unable to recover it. 00:32:18.480 [2024-11-26 07:42:02.572692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.572702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.573006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.573015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.573198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.573206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 
00:32:18.481 [2024-11-26 07:42:02.573393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.573401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.573594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.573603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.573895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.573903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.574100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.574108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.574433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.574450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 
00:32:18.481 [2024-11-26 07:42:02.574748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.574757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.574964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.574974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.575324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.575332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.575507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.575517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.575722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.575731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 
00:32:18.481 [2024-11-26 07:42:02.575915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.575924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.576136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.576145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.576472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.576480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.576714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.576722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.577031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.577040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 
00:32:18.481 [2024-11-26 07:42:02.577329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.577337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.577511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.577520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.577812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.577821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.578151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.578160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.578495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.578504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 
00:32:18.481 [2024-11-26 07:42:02.578667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.578675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.578868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.578877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.579174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.579183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.579493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.579502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 00:32:18.481 [2024-11-26 07:42:02.579807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.481 [2024-11-26 07:42:02.579816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.481 qpair failed and we were unable to recover it. 
00:32:18.481 [2024-11-26 07:42:02.580142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.481 [2024-11-26 07:42:02.580151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.481 qpair failed and we were unable to recover it.
00:32:18.481-00:32:18.759 [... the same three-line error record (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every connect retry from 07:42:02.580461 through 07:42:02.613248 ...]
00:32:18.759 [2024-11-26 07:42:02.613537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.613545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.613733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.613741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.613946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.613954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.614320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.614329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.614634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.614643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 
00:32:18.759 [2024-11-26 07:42:02.614979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.614988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.615326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.615335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.615648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.615657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.615977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.615986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.616327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.616336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 
00:32:18.759 [2024-11-26 07:42:02.616646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.616654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.616889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.616897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.617236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.617244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.617552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.617561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.617872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.617881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 
00:32:18.759 [2024-11-26 07:42:02.618198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.618207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.618516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.618524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.618672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.618680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.618983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.618994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.619319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.619327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 
00:32:18.759 [2024-11-26 07:42:02.619634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.619642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.619935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.619943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.620268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.620277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.620590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.620599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.620939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.620947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 
00:32:18.759 [2024-11-26 07:42:02.621274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.621283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.621593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.621601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.621908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.621917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.622234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.622243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.622550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.622559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 
00:32:18.759 [2024-11-26 07:42:02.622874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.622883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.623184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.623192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.623336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.623344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.623645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.759 [2024-11-26 07:42:02.623654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.759 qpair failed and we were unable to recover it. 00:32:18.759 [2024-11-26 07:42:02.623938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.623946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 
00:32:18.760 [2024-11-26 07:42:02.624262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.624270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.624459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.624467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.624782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.624790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.624979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.624987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.625189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.625197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 
00:32:18.760 [2024-11-26 07:42:02.625493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.625502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.625821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.625830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.626005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.626013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.626277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.626285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.626468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.626475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 
00:32:18.760 [2024-11-26 07:42:02.626776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.626784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.627101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.627110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.627417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.627425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.627740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.627749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.628058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.628067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 
00:32:18.760 [2024-11-26 07:42:02.628370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.628379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.628690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.628698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.629019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.629028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.629359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.629368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.629656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.629664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 
00:32:18.760 [2024-11-26 07:42:02.629972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.629981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.630136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.630144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.630463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.630472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.630803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.630814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.631127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.631135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 
00:32:18.760 [2024-11-26 07:42:02.631420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.631428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.631700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.631709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.632022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.632031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.632343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.632351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.632663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.632671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 
00:32:18.760 [2024-11-26 07:42:02.632879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.632887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.633098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.633107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.633414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.633422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.633681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.633689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.634043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.634052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 
00:32:18.760 [2024-11-26 07:42:02.634385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.634394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.760 [2024-11-26 07:42:02.634704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.760 [2024-11-26 07:42:02.634712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.760 qpair failed and we were unable to recover it. 00:32:18.761 [2024-11-26 07:42:02.635071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.761 [2024-11-26 07:42:02.635080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.761 qpair failed and we were unable to recover it. 00:32:18.761 [2024-11-26 07:42:02.635395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.761 [2024-11-26 07:42:02.635403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.761 qpair failed and we were unable to recover it. 00:32:18.761 [2024-11-26 07:42:02.635699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.761 [2024-11-26 07:42:02.635707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.761 qpair failed and we were unable to recover it. 
00:32:18.761 [2024-11-26 07:42:02.636023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.761 [2024-11-26 07:42:02.636031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.761 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." message triple repeated for tqpair=0x7f90f4000b90 (addr=10.0.0.2, port=4420) from 07:42:02.636397 through 07:42:02.669485 ...]
00:32:18.764 [2024-11-26 07:42:02.669689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.669698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.669987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.669995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.670307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.670316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.670533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.670541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.670735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.670743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 
00:32:18.764 [2024-11-26 07:42:02.670943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.670951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.671229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.671237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.671449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.671457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.671782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.671790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.671984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.671992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 
00:32:18.764 [2024-11-26 07:42:02.672291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.672300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.672596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.672605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.672835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.672843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.673156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.673165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.673335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.673343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 
00:32:18.764 [2024-11-26 07:42:02.673675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.673684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.673952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.673960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.674334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.674342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.674676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.674684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.674878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.674887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 
00:32:18.764 [2024-11-26 07:42:02.675238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.675247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.675453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.675462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.675780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.675789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.676158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.676166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.676482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.676491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 
00:32:18.764 [2024-11-26 07:42:02.676799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.764 [2024-11-26 07:42:02.676807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.764 qpair failed and we were unable to recover it. 00:32:18.764 [2024-11-26 07:42:02.677009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.677017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.677276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.677284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.677594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.677603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.677900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.677908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 
00:32:18.765 [2024-11-26 07:42:02.678222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.678230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.678545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.678553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.678864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.678873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.679179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.679187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.679348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.679357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 
00:32:18.765 [2024-11-26 07:42:02.679591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.679599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.679920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.679929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.680218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.680225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.680533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.680541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.680850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.680859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 
00:32:18.765 [2024-11-26 07:42:02.681076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.681083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.681274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.681281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.681560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.681569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.681835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.681844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.682073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.682081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 
00:32:18.765 [2024-11-26 07:42:02.682252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.682260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.682576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.682585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.682762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.682771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.682937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.682947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.683274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.683284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 
00:32:18.765 [2024-11-26 07:42:02.683591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.683600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.683904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.683913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.684239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.684247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.684539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.684548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.684735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.684742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 
00:32:18.765 [2024-11-26 07:42:02.685034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.685042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.685364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.685373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.685706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.685716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.685889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.685899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.686305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.686314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 
00:32:18.765 [2024-11-26 07:42:02.686621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.686630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.686934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.686942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.687272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.687281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.687525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.765 [2024-11-26 07:42:02.687534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.765 qpair failed and we were unable to recover it. 00:32:18.765 [2024-11-26 07:42:02.687837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.766 [2024-11-26 07:42:02.687845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.766 qpair failed and we were unable to recover it. 
00:32:18.766 [2024-11-26 07:42:02.688155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.766 [2024-11-26 07:42:02.688164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.766 qpair failed and we were unable to recover it. 00:32:18.766 [2024-11-26 07:42:02.688451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.766 [2024-11-26 07:42:02.688459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.766 qpair failed and we were unable to recover it. 00:32:18.766 [2024-11-26 07:42:02.688771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.766 [2024-11-26 07:42:02.688779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.766 qpair failed and we were unable to recover it. 00:32:18.766 [2024-11-26 07:42:02.689091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.766 [2024-11-26 07:42:02.689100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.766 qpair failed and we were unable to recover it. 00:32:18.766 [2024-11-26 07:42:02.689401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.766 [2024-11-26 07:42:02.689409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.766 qpair failed and we were unable to recover it. 
00:32:18.766 [2024-11-26 07:42:02.689598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.766 [2024-11-26 07:42:02.689605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.766 qpair failed and we were unable to recover it. 00:32:18.766 [2024-11-26 07:42:02.689763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.766 [2024-11-26 07:42:02.689772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.766 qpair failed and we were unable to recover it. 00:32:18.766 [2024-11-26 07:42:02.689977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.766 [2024-11-26 07:42:02.689986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.766 qpair failed and we were unable to recover it. 00:32:18.766 [2024-11-26 07:42:02.690335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.766 [2024-11-26 07:42:02.690343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.766 qpair failed and we were unable to recover it. 00:32:18.766 [2024-11-26 07:42:02.690697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.766 [2024-11-26 07:42:02.690706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.766 qpair failed and we were unable to recover it. 
00:32:18.766 [2024-11-26 07:42:02.690902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.766 [2024-11-26 07:42:02.690910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.766 qpair failed and we were unable to recover it. 00:32:18.766 [2024-11-26 07:42:02.691197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.766 [2024-11-26 07:42:02.691205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.766 qpair failed and we were unable to recover it. 00:32:18.766 [2024-11-26 07:42:02.691504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.766 [2024-11-26 07:42:02.691512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.766 qpair failed and we were unable to recover it. 00:32:18.766 [2024-11-26 07:42:02.691692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.766 [2024-11-26 07:42:02.691700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.766 qpair failed and we were unable to recover it. 00:32:18.766 [2024-11-26 07:42:02.692111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.766 [2024-11-26 07:42:02.692119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.766 qpair failed and we were unable to recover it. 
00:32:18.769 [2024-11-26 07:42:02.725032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.725041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 00:32:18.769 [2024-11-26 07:42:02.725359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.725368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 00:32:18.769 [2024-11-26 07:42:02.725683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.725692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 00:32:18.769 [2024-11-26 07:42:02.725869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.725878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 00:32:18.769 [2024-11-26 07:42:02.726184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.726192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 
00:32:18.769 [2024-11-26 07:42:02.726493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.726502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 00:32:18.769 [2024-11-26 07:42:02.726809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.726817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 00:32:18.769 [2024-11-26 07:42:02.727107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.727117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 00:32:18.769 [2024-11-26 07:42:02.727426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.727435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 00:32:18.769 [2024-11-26 07:42:02.727746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.727755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 
00:32:18.769 [2024-11-26 07:42:02.728082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.728090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 00:32:18.769 [2024-11-26 07:42:02.728389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.728398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 00:32:18.769 [2024-11-26 07:42:02.728711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.728720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 00:32:18.769 [2024-11-26 07:42:02.729048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.729056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 00:32:18.769 [2024-11-26 07:42:02.729399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.729407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 
00:32:18.769 [2024-11-26 07:42:02.729707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.729716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 00:32:18.769 [2024-11-26 07:42:02.729944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.729952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 00:32:18.769 [2024-11-26 07:42:02.730274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.730283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 00:32:18.769 [2024-11-26 07:42:02.730589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.730597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 00:32:18.769 [2024-11-26 07:42:02.730762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.769 [2024-11-26 07:42:02.730771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.769 qpair failed and we were unable to recover it. 
00:32:18.769 [2024-11-26 07:42:02.731065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.731075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.731391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.731400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.731604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.731612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.731920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.731928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.732245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.732254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 
00:32:18.770 [2024-11-26 07:42:02.732600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.732609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.732913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.732922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.733106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.733113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.733424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.733432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.733742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.733750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 
00:32:18.770 [2024-11-26 07:42:02.734040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.734048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.734372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.734381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.734593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.734602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.734910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.734919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.735230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.735239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 
00:32:18.770 [2024-11-26 07:42:02.735474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.735482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.735790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.735798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.736007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.736015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.736344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.736354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.736643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.736652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 
00:32:18.770 [2024-11-26 07:42:02.736954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.736963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.737133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.737142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.737325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.737334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.737644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.737652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.737930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.737938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 
00:32:18.770 [2024-11-26 07:42:02.738280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.738288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.738584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.738592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.738964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.738972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.739269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.739278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.739585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.739594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 
00:32:18.770 [2024-11-26 07:42:02.739903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.739912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.740216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.740225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.740515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.740524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.740821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.740829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.741137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.741147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 
00:32:18.770 [2024-11-26 07:42:02.741437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.741445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.741753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.741762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.741932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.770 [2024-11-26 07:42:02.741940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.770 qpair failed and we were unable to recover it. 00:32:18.770 [2024-11-26 07:42:02.742158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.771 [2024-11-26 07:42:02.742165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.771 qpair failed and we were unable to recover it. 00:32:18.771 [2024-11-26 07:42:02.742500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.771 [2024-11-26 07:42:02.742508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.771 qpair failed and we were unable to recover it. 
00:32:18.771 [2024-11-26 07:42:02.742819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.771 [2024-11-26 07:42:02.742828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.771 qpair failed and we were unable to recover it. 00:32:18.771 [2024-11-26 07:42:02.743152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.771 [2024-11-26 07:42:02.743161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.771 qpair failed and we were unable to recover it. 00:32:18.771 [2024-11-26 07:42:02.743472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.771 [2024-11-26 07:42:02.743481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.771 qpair failed and we were unable to recover it. 00:32:18.771 [2024-11-26 07:42:02.743777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.771 [2024-11-26 07:42:02.743785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.771 qpair failed and we were unable to recover it. 00:32:18.771 [2024-11-26 07:42:02.744080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.771 [2024-11-26 07:42:02.744088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.771 qpair failed and we were unable to recover it. 
00:32:18.771 [2024-11-26 07:42:02.744287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.771 [2024-11-26 07:42:02.744295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.771 qpair failed and we were unable to recover it. 00:32:18.771 [2024-11-26 07:42:02.744590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.771 [2024-11-26 07:42:02.744599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.771 qpair failed and we were unable to recover it. 00:32:18.771 [2024-11-26 07:42:02.744788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.771 [2024-11-26 07:42:02.744796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.771 qpair failed and we were unable to recover it. 00:32:18.771 [2024-11-26 07:42:02.745077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.771 [2024-11-26 07:42:02.745085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.771 qpair failed and we were unable to recover it. 00:32:18.771 [2024-11-26 07:42:02.745442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.771 [2024-11-26 07:42:02.745451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.771 qpair failed and we were unable to recover it. 
00:32:18.771 [2024-11-26 07:42:02.745763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.771 [2024-11-26 07:42:02.745772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.771 qpair failed and we were unable to recover it. 00:32:18.771 [2024-11-26 07:42:02.746132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.771 [2024-11-26 07:42:02.746140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.771 qpair failed and we were unable to recover it. 00:32:18.771 [2024-11-26 07:42:02.746456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.771 [2024-11-26 07:42:02.746465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.771 qpair failed and we were unable to recover it. 00:32:18.771 [2024-11-26 07:42:02.746620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.771 [2024-11-26 07:42:02.746630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.771 qpair failed and we were unable to recover it. 00:32:18.771 [2024-11-26 07:42:02.746928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.771 [2024-11-26 07:42:02.746937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.771 qpair failed and we were unable to recover it. 
00:32:18.771 [2024-11-26 07:42:02.747308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.771 [2024-11-26 07:42:02.747317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.771 qpair failed and we were unable to recover it. 
00:32:18.774 [2024-11-26 07:42:02.781108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.774 [2024-11-26 07:42:02.781116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.774 qpair failed and we were unable to recover it. 00:32:18.774 [2024-11-26 07:42:02.781433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.774 [2024-11-26 07:42:02.781442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.774 qpair failed and we were unable to recover it. 00:32:18.774 [2024-11-26 07:42:02.781750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.774 [2024-11-26 07:42:02.781758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.774 qpair failed and we were unable to recover it. 00:32:18.774 [2024-11-26 07:42:02.782048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.774 [2024-11-26 07:42:02.782057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.774 qpair failed and we were unable to recover it. 00:32:18.774 [2024-11-26 07:42:02.782221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.774 [2024-11-26 07:42:02.782231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.774 qpair failed and we were unable to recover it. 
00:32:18.774 [2024-11-26 07:42:02.782548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.774 [2024-11-26 07:42:02.782557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.774 qpair failed and we were unable to recover it. 00:32:18.774 [2024-11-26 07:42:02.782727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.774 [2024-11-26 07:42:02.782734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.774 qpair failed and we were unable to recover it. 00:32:18.774 [2024-11-26 07:42:02.783055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.774 [2024-11-26 07:42:02.783064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.774 qpair failed and we were unable to recover it. 00:32:18.774 [2024-11-26 07:42:02.783380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.774 [2024-11-26 07:42:02.783389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.774 qpair failed and we were unable to recover it. 00:32:18.774 [2024-11-26 07:42:02.783701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.774 [2024-11-26 07:42:02.783710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.774 qpair failed and we were unable to recover it. 
00:32:18.774 [2024-11-26 07:42:02.783907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.774 [2024-11-26 07:42:02.783916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.774 qpair failed and we were unable to recover it. 00:32:18.774 [2024-11-26 07:42:02.784247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.774 [2024-11-26 07:42:02.784255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.774 qpair failed and we were unable to recover it. 00:32:18.774 [2024-11-26 07:42:02.784549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.774 [2024-11-26 07:42:02.784558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.774 qpair failed and we were unable to recover it. 00:32:18.774 [2024-11-26 07:42:02.784887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.774 [2024-11-26 07:42:02.784896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.774 qpair failed and we were unable to recover it. 00:32:18.774 [2024-11-26 07:42:02.785305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.774 [2024-11-26 07:42:02.785314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 
00:32:18.775 [2024-11-26 07:42:02.785620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.785629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.785926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.785935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.786262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.786271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.786576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.786584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.786884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.786892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 
00:32:18.775 [2024-11-26 07:42:02.787240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.787248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.787556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.787566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.787873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.787882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.788199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.788208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.788380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.788389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 
00:32:18.775 [2024-11-26 07:42:02.788705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.788713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.788912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.788921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.789194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.789202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.789532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.789541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.789865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.789874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 
00:32:18.775 [2024-11-26 07:42:02.790149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.790157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.790467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.790476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.790804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.790813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.791024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.791033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.791228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.791236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 
00:32:18.775 [2024-11-26 07:42:02.791514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.791525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.791829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.791839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.792223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.792233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.792541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.792550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.792766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.792776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 
00:32:18.775 [2024-11-26 07:42:02.793098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.793107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.793410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.793419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.793725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.793735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.794074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.794084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.794411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.794419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 
00:32:18.775 [2024-11-26 07:42:02.794733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.794742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.795026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.795035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.795214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.795223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.795562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.795570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.795871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.795880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 
00:32:18.775 [2024-11-26 07:42:02.796190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.796198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.796489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.775 [2024-11-26 07:42:02.796497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.775 qpair failed and we were unable to recover it. 00:32:18.775 [2024-11-26 07:42:02.796796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.796805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.797106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.797116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.797433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.797441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 
00:32:18.776 [2024-11-26 07:42:02.797627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.797636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.797932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.797941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.798308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.798317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.798617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.798626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.798938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.798947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 
00:32:18.776 [2024-11-26 07:42:02.799139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.799149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.799421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.799429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.799741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.799749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.800055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.800064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.800352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.800362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 
00:32:18.776 [2024-11-26 07:42:02.800666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.800675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.800984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.800993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.801300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.801308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.801602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.801610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.801889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.801898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 
00:32:18.776 [2024-11-26 07:42:02.802234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.802242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.802559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.802569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.802852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.802864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.803154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.803162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.803471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.803479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 
00:32:18.776 [2024-11-26 07:42:02.803787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.803798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.804101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.804110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.804422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.804431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.804731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.804740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 00:32:18.776 [2024-11-26 07:42:02.805035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.776 [2024-11-26 07:42:02.805044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.776 qpair failed and we were unable to recover it. 
00:32:18.779 [2024-11-26 07:42:02.837895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.779 [2024-11-26 07:42:02.837904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.779 qpair failed and we were unable to recover it. 00:32:18.779 [2024-11-26 07:42:02.838236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.779 [2024-11-26 07:42:02.838245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.779 qpair failed and we were unable to recover it. 00:32:18.779 [2024-11-26 07:42:02.838456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.779 [2024-11-26 07:42:02.838464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.779 qpair failed and we were unable to recover it. 00:32:18.779 [2024-11-26 07:42:02.838810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.779 [2024-11-26 07:42:02.838818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.779 qpair failed and we were unable to recover it. 00:32:18.779 [2024-11-26 07:42:02.838986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.779 [2024-11-26 07:42:02.838994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.779 qpair failed and we were unable to recover it. 
00:32:18.779 [2024-11-26 07:42:02.839287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.779 [2024-11-26 07:42:02.839297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.779 qpair failed and we were unable to recover it. 00:32:18.779 [2024-11-26 07:42:02.839618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.779 [2024-11-26 07:42:02.839627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.779 qpair failed and we were unable to recover it. 00:32:18.779 [2024-11-26 07:42:02.839921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.779 [2024-11-26 07:42:02.839931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.779 qpair failed and we were unable to recover it. 00:32:18.779 [2024-11-26 07:42:02.840152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.779 [2024-11-26 07:42:02.840160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.779 qpair failed and we were unable to recover it. 00:32:18.779 [2024-11-26 07:42:02.840489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.840498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 
00:32:18.780 [2024-11-26 07:42:02.840693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.840701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.840868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.840877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.841175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.841183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.841396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.841404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.841726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.841735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 
00:32:18.780 [2024-11-26 07:42:02.842030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.842038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.842229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.842238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.842410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.842421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.842750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.842759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.843096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.843106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 
00:32:18.780 [2024-11-26 07:42:02.843291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.843299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.843619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.843629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.843927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.843936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.844221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.844229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.844536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.844544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 
00:32:18.780 [2024-11-26 07:42:02.844856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.844868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.845148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.845156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.845453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.845462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.845646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.845654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.845987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.845997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 
00:32:18.780 [2024-11-26 07:42:02.846304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.846312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.846602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.846610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.846875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.846883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.847152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.847160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.847342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.847350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 
00:32:18.780 [2024-11-26 07:42:02.847672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.847680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.847994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.848003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.848311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.848321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.848624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.848632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.848922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.848931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 
00:32:18.780 [2024-11-26 07:42:02.849116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.849124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.849449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.849458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.849765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.780 [2024-11-26 07:42:02.849774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.780 qpair failed and we were unable to recover it. 00:32:18.780 [2024-11-26 07:42:02.849959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.849968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.850239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.850247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 
00:32:18.781 [2024-11-26 07:42:02.850556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.850567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.850878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.850887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.851094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.851102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.851399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.851407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.851712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.851720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 
00:32:18.781 [2024-11-26 07:42:02.852028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.852037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.852222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.852231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.852538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.852547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.852859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.852870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.853156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.853164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 
00:32:18.781 [2024-11-26 07:42:02.853463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.853471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.853775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.853785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.853962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.853970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.854309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.854318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.854634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.854643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 
00:32:18.781 [2024-11-26 07:42:02.854921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.854930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.855099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.855107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.855407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.855415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.855731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.855740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.856056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.856065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 
00:32:18.781 [2024-11-26 07:42:02.856392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.856400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.856708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.856716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.857040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.857048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.857333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.857341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.857526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.857533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 
00:32:18.781 [2024-11-26 07:42:02.857802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.857811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.857981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.857990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.858326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.858334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.858640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.858649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 00:32:18.781 [2024-11-26 07:42:02.858964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.781 [2024-11-26 07:42:02.858973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:18.781 qpair failed and we were unable to recover it. 
00:32:18.781 [2024-11-26 07:42:02.859305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.781 [2024-11-26 07:42:02.859314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:18.781 qpair failed and we were unable to recover it.
[... the same three-line failure cycle (connect() errno = 111, sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420, qpair unrecoverable) repeats verbatim with advancing timestamps through 2024-11-26 07:42:02.893794; duplicate log entries elided ...]
00:32:19.060 [2024-11-26 07:42:02.894182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.060 [2024-11-26 07:42:02.894190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.060 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.894490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.894499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.894787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.894795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.895101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.895110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.895423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.895431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 
00:32:19.061 [2024-11-26 07:42:02.895600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.895608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.895943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.895951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.896235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.896243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.896557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.896566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.896743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.896751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 
00:32:19.061 [2024-11-26 07:42:02.897041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.897049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.897368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.897377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.897688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.897696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.898007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.898015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.898322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.898330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 
00:32:19.061 [2024-11-26 07:42:02.898644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.898655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.898925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.898934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.899156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.899163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.899476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.899484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.899783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.899792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 
00:32:19.061 [2024-11-26 07:42:02.900104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.900113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.900317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.900325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.900508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.900517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.900780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.900788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.901108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.901117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 
00:32:19.061 [2024-11-26 07:42:02.901419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.901427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.901714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.901722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.902033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.902041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.902359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.902368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.902719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.902727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 
00:32:19.061 [2024-11-26 07:42:02.903051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.903061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.903374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.903382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.903696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.903705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.904024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.904033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.904345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.904354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 
00:32:19.061 [2024-11-26 07:42:02.904668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.904676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.904989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.904998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.905359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.061 [2024-11-26 07:42:02.905367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:42:02.905675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.905684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.906007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.906015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 
00:32:19.062 [2024-11-26 07:42:02.906335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.906343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.906525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.906531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.906724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.906731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.907023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.907030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.907353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.907360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 
00:32:19.062 [2024-11-26 07:42:02.907670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.907677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.907994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.908000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.908314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.908320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.908627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.908633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.908947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.908954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 
00:32:19.062 [2024-11-26 07:42:02.909273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.909280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.909555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.909561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.909870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.909879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.910170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.910179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.910506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.910514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 
00:32:19.062 [2024-11-26 07:42:02.910824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.910833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.911031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.911039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.911343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.911352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.911688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.911697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.912009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.912019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 
00:32:19.062 [2024-11-26 07:42:02.912345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.912355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.912659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.912668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.912915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.912924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.913228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.913237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.913542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.913551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 
00:32:19.062 [2024-11-26 07:42:02.913866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.913876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.914167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.914176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.914491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.914501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.914815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.914824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.915144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.915153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 
00:32:19.062 [2024-11-26 07:42:02.915488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.915498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.915808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.915817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.916004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.916014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.916289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.916299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:42:02.916591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.062 [2024-11-26 07:42:02.916600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.062 qpair failed and we were unable to recover it. 
00:32:19.062 [2024-11-26 07:42:02.916918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.062 [2024-11-26 07:42:02.916927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:19.062 qpair failed and we were unable to recover it.
00:32:19.063 [message sequence above (posix_sock_create connect() errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeated through 2024-11-26 07:42:02.950765]
00:32:19.066 [2024-11-26 07:42:02.950937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.950945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.951253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.951262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.951588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.951597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.951869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.951877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.952153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.952161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 
00:32:19.066 [2024-11-26 07:42:02.952225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.952232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.952513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.952521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.952812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.952821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.952977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.952988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.953293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.953301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 
00:32:19.066 [2024-11-26 07:42:02.953631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.953640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.953776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.953786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.954075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.954085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.954436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.954445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.954617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.954627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 
00:32:19.066 [2024-11-26 07:42:02.954947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.954956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.955130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.955138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.955430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.955438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.955627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.955635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.955837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.955845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 
00:32:19.066 [2024-11-26 07:42:02.956161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.956170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.956487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.956495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.956802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.956811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.957107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.957116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.957274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.957282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 
00:32:19.066 [2024-11-26 07:42:02.957638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.957646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.957959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.957967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.958133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.958141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.958483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.958491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.958808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.958817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 
00:32:19.066 [2024-11-26 07:42:02.959129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.959137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.959296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.959304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.959495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.959503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.959829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.959837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.960156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.960165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 
00:32:19.066 [2024-11-26 07:42:02.960477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.960485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.960799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.960808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.961110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.961119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.961280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.961289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.961486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.961495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 
00:32:19.066 [2024-11-26 07:42:02.961690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.961699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.961944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.961953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.962264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.962272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.962594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.962603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.962804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.962812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 
00:32:19.066 [2024-11-26 07:42:02.963175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.963184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.963491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.963500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.963646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.963655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.963853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.963863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.964144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.964152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 
00:32:19.066 [2024-11-26 07:42:02.964498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.066 [2024-11-26 07:42:02.964506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.066 qpair failed and we were unable to recover it. 00:32:19.066 [2024-11-26 07:42:02.964813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.964821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 00:32:19.067 [2024-11-26 07:42:02.965163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.965173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 00:32:19.067 [2024-11-26 07:42:02.965479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.965488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 00:32:19.067 [2024-11-26 07:42:02.965775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.965784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 
00:32:19.067 [2024-11-26 07:42:02.966062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.966070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 00:32:19.067 [2024-11-26 07:42:02.966460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.966468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 00:32:19.067 [2024-11-26 07:42:02.966775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.966783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 00:32:19.067 [2024-11-26 07:42:02.966970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.966978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 00:32:19.067 [2024-11-26 07:42:02.967274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.967282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 
00:32:19.067 [2024-11-26 07:42:02.967603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.967612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 00:32:19.067 [2024-11-26 07:42:02.967919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.967928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 00:32:19.067 [2024-11-26 07:42:02.968258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.968266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 00:32:19.067 [2024-11-26 07:42:02.968479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.968486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 00:32:19.067 [2024-11-26 07:42:02.968805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.968813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 
00:32:19.067 [2024-11-26 07:42:02.969125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.969133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 00:32:19.067 [2024-11-26 07:42:02.969429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.969437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 00:32:19.067 [2024-11-26 07:42:02.969746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.969755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 00:32:19.067 [2024-11-26 07:42:02.970030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.970038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 00:32:19.067 [2024-11-26 07:42:02.970398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.970407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 
00:32:19.067 [2024-11-26 07:42:02.970710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.970718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 00:32:19.067 [2024-11-26 07:42:02.970889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.970899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 00:32:19.067 [2024-11-26 07:42:02.971078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.971088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 00:32:19.067 [2024-11-26 07:42:02.971387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.971395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 00:32:19.067 [2024-11-26 07:42:02.971677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.971685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 
00:32:19.067 [2024-11-26 07:42:02.971998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.067 [2024-11-26 07:42:02.972007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.067 qpair failed and we were unable to recover it. 
[... the same three-message sequence — posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats ~114 more times between 07:42:02.972 and 07:42:03.005; identical except for timestamps, omitted here ...]
00:32:19.069 [2024-11-26 07:42:03.006181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.006190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.006497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.006506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.006726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.006734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.007050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.007060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.007362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.007371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 
00:32:19.069 [2024-11-26 07:42:03.007679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.007688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.007871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.007881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.008189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.008197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.008494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.008502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.008813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.008822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 
00:32:19.069 [2024-11-26 07:42:03.009157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.009166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.009474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.009484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.009784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.009792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.010101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.010110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.010417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.010426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 
00:32:19.069 [2024-11-26 07:42:03.010696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.010704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.011033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.011042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.011327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.011336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.011645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.011653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.011954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.011963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 
00:32:19.069 [2024-11-26 07:42:03.012298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.012305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.012465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.012473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.012802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.012812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.013109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.013118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.013406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.013414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 
00:32:19.069 [2024-11-26 07:42:03.013726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.013735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.013939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.013948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.013994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.014001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.014193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.014200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.014505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.014515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 
00:32:19.069 [2024-11-26 07:42:03.014826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.014835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.015146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.015155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.015314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.015323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.015663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.015672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.015979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.015988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 
00:32:19.069 [2024-11-26 07:42:03.016301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.016310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.016703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.016712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.016990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.016999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.017301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.017310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.017489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.017497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 
00:32:19.069 [2024-11-26 07:42:03.017772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.017781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.018066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.018074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.018392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.018401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.018709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.018719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.019080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.019089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 
00:32:19.069 [2024-11-26 07:42:03.019390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.019399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.019710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.019718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.020027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.020036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.020364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.020374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.020539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.020547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 
00:32:19.069 [2024-11-26 07:42:03.020870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.020878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.021153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.021162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.021480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.021489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.021792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.021801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.022108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.022117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 
00:32:19.069 [2024-11-26 07:42:03.022331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.022339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.022638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.022646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.022938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.022947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.069 [2024-11-26 07:42:03.023266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.069 [2024-11-26 07:42:03.023274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.069 qpair failed and we were unable to recover it. 00:32:19.070 [2024-11-26 07:42:03.023588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.070 [2024-11-26 07:42:03.023597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.070 qpair failed and we were unable to recover it. 
00:32:19.070 [2024-11-26 07:42:03.023754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.070 [2024-11-26 07:42:03.023764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.070 qpair failed and we were unable to recover it. 00:32:19.070 [2024-11-26 07:42:03.024106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.070 [2024-11-26 07:42:03.024114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.070 qpair failed and we were unable to recover it. 00:32:19.070 [2024-11-26 07:42:03.024423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.070 [2024-11-26 07:42:03.024435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.070 qpair failed and we were unable to recover it. 00:32:19.070 [2024-11-26 07:42:03.024743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.070 [2024-11-26 07:42:03.024752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.070 qpair failed and we were unable to recover it. 00:32:19.070 [2024-11-26 07:42:03.024937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.070 [2024-11-26 07:42:03.024946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.070 qpair failed and we were unable to recover it. 
00:32:19.070 [2024-11-26 07:42:03.025220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.070 [2024-11-26 07:42:03.025229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.070 qpair failed and we were unable to recover it. 00:32:19.070 [2024-11-26 07:42:03.025563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.070 [2024-11-26 07:42:03.025572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.070 qpair failed and we were unable to recover it. 00:32:19.070 [2024-11-26 07:42:03.025880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.070 [2024-11-26 07:42:03.025889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.070 qpair failed and we were unable to recover it. 00:32:19.070 [2024-11-26 07:42:03.026098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.070 [2024-11-26 07:42:03.026106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.070 qpair failed and we were unable to recover it. 00:32:19.070 [2024-11-26 07:42:03.026409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.070 [2024-11-26 07:42:03.026418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.070 qpair failed and we were unable to recover it. 
00:32:19.070 [2024-11-26 07:42:03.026730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.070 [2024-11-26 07:42:03.026738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.070 qpair failed and we were unable to recover it. 00:32:19.070 [2024-11-26 07:42:03.027127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.070 [2024-11-26 07:42:03.027136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.070 qpair failed and we were unable to recover it. 00:32:19.070 [2024-11-26 07:42:03.027434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.070 [2024-11-26 07:42:03.027442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.070 qpair failed and we were unable to recover it. 00:32:19.070 [2024-11-26 07:42:03.027722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.070 [2024-11-26 07:42:03.027731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.070 qpair failed and we were unable to recover it. 00:32:19.070 [2024-11-26 07:42:03.027918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.070 [2024-11-26 07:42:03.027926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.070 qpair failed and we were unable to recover it. 
00:32:19.070 [2024-11-26 07:42:03.028285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.070 [2024-11-26 07:42:03.028294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:19.070 qpair failed and we were unable to recover it.
[repeats of the same record (connect() failed, errno = 111; sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) from 07:42:03.028607 through 07:42:03.050690 trimmed]
00:32:19.071 [2024-11-26 07:42:03.051976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.071 [2024-11-26 07:42:03.052017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.071 qpair failed and we were unable to recover it.
[repeats of the same record (connect() failed, errno = 111; sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) from 07:42:03.052218 through 07:42:03.062940 trimmed]
00:32:19.071 [2024-11-26 07:42:03.063278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.071 [2024-11-26 07:42:03.063290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.071 qpair failed and we were unable to recover it. 00:32:19.071 [2024-11-26 07:42:03.063464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.071 [2024-11-26 07:42:03.063475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.071 qpair failed and we were unable to recover it. 00:32:19.071 [2024-11-26 07:42:03.063755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.071 [2024-11-26 07:42:03.063766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.071 qpair failed and we were unable to recover it. 00:32:19.071 [2024-11-26 07:42:03.064093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.071 [2024-11-26 07:42:03.064105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.071 qpair failed and we were unable to recover it. 00:32:19.071 [2024-11-26 07:42:03.064382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.071 [2024-11-26 07:42:03.064393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.071 qpair failed and we were unable to recover it. 
00:32:19.071 [2024-11-26 07:42:03.064703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.071 [2024-11-26 07:42:03.064714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.071 qpair failed and we were unable to recover it. 00:32:19.071 [2024-11-26 07:42:03.064907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.071 [2024-11-26 07:42:03.064919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.071 qpair failed and we were unable to recover it. 00:32:19.071 [2024-11-26 07:42:03.065237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.071 [2024-11-26 07:42:03.065248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.071 qpair failed and we were unable to recover it. 00:32:19.071 [2024-11-26 07:42:03.065561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.071 [2024-11-26 07:42:03.065573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.071 qpair failed and we were unable to recover it. 00:32:19.071 [2024-11-26 07:42:03.065754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.071 [2024-11-26 07:42:03.065765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.071 qpair failed and we were unable to recover it. 
00:32:19.071 [2024-11-26 07:42:03.066103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.071 [2024-11-26 07:42:03.066116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.071 qpair failed and we were unable to recover it. 00:32:19.071 [2024-11-26 07:42:03.066457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.071 [2024-11-26 07:42:03.066469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.066671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.066683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.066899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.066911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.067234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.067245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 
00:32:19.072 [2024-11-26 07:42:03.067428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.067438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.067767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.067778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.068008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.068019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.068331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.068342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.068679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.068691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 
00:32:19.072 [2024-11-26 07:42:03.069101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.069113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.069416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.069428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.069732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.069743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.070027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.070038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.070367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.070379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 
00:32:19.072 [2024-11-26 07:42:03.070680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.070693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.070999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.071011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.071347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.071358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.071546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.071556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.071875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.071886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 
00:32:19.072 [2024-11-26 07:42:03.072192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.072203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.072537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.072549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.072738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.072750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.073020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.073032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.073354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.073366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 
00:32:19.072 [2024-11-26 07:42:03.073549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.073561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.073751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.073762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.073968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.073980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.074309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.074320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.074659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.074671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 
00:32:19.072 [2024-11-26 07:42:03.074979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.074991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.075323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.075335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.075689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.075700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.075888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.075900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.076225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.076236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 
00:32:19.072 [2024-11-26 07:42:03.076545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.076556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.076892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.076904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.077210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.077227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.077546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.077558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.077894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.077906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 
00:32:19.072 [2024-11-26 07:42:03.078219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.078230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.078566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.078578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.078942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.078954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.079263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.079275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.079577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.079589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 
00:32:19.072 [2024-11-26 07:42:03.079796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.079807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.080122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.080133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.080441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.080453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.080761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.080772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.081115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.081127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 
00:32:19.072 [2024-11-26 07:42:03.081433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.081445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.081760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.081772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.082099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.082111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.082441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.082453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.082797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.082809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 
00:32:19.072 [2024-11-26 07:42:03.082997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.083010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.083194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.083206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.083503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.083514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.083817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.083829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.084140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.084152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 
00:32:19.072 [2024-11-26 07:42:03.084460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.084473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.084767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.084780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.085093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.085106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.085413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.085426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 00:32:19.072 [2024-11-26 07:42:03.085734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.072 [2024-11-26 07:42:03.085748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.072 qpair failed and we were unable to recover it. 
00:32:19.072 [2024-11-26 07:42:03.085968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.073 [2024-11-26 07:42:03.085980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.073 qpair failed and we were unable to recover it.
00:32:19.074 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeated 114 more times, timestamps 07:42:03.086216 through 07:42:03.121333 ...]
00:32:19.074 [2024-11-26 07:42:03.121532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.121543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.121945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.121957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.122038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.122047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.122327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.122337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.122672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.122683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 
00:32:19.074 [2024-11-26 07:42:03.122996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.123008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.123212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.123227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.123515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.123526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.123855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.123869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.124258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.124269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 
00:32:19.074 [2024-11-26 07:42:03.124570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.124582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.124868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.124879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.125130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.125141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.125443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.125454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.125803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.125814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 
00:32:19.074 [2024-11-26 07:42:03.125997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.126007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.126319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.126331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.126632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.126643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.126983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.126995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.127224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.127236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 
00:32:19.074 [2024-11-26 07:42:03.127554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.127566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.127869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.127881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.128064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.128076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.128375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.074 [2024-11-26 07:42:03.128387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.074 qpair failed and we were unable to recover it. 00:32:19.074 [2024-11-26 07:42:03.128693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.128704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 
00:32:19.075 [2024-11-26 07:42:03.129047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.129059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.129376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.129387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.129727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.129739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.130088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.130100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.130411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.130423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 
00:32:19.075 [2024-11-26 07:42:03.130734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.130746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.130930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.130943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.131245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.131258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.131511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.131523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.131855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.131871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 
00:32:19.075 [2024-11-26 07:42:03.132209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.132220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.132543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.132555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.132857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.132871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.133164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.133176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.133504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.133515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 
00:32:19.075 [2024-11-26 07:42:03.133815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.133827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.134021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.134032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.134361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.134373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.134588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.134599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.135017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.135029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 
00:32:19.075 [2024-11-26 07:42:03.135337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.135349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.135667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.135678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.135882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.135893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.136198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.136209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.136538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.136549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 
00:32:19.075 [2024-11-26 07:42:03.136807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.136819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.137131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.137144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.137507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.137518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.137828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.137839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.138139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.138151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 
00:32:19.075 [2024-11-26 07:42:03.138383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.138393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.138701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.138714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.139065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.139077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.139392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.139404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.139601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.139612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 
00:32:19.075 [2024-11-26 07:42:03.139943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.139954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.140280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.140292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.140638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.140649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.140973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.140984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.141300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.141311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 
00:32:19.075 [2024-11-26 07:42:03.141497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.141509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.141835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.141846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.142153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.142165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.142471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.142482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.142749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.142760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 
00:32:19.075 [2024-11-26 07:42:03.143048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.143059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.143357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.143369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.143550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.143562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.143853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.143867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 00:32:19.075 [2024-11-26 07:42:03.144191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.075 [2024-11-26 07:42:03.144204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.075 qpair failed and we were unable to recover it. 
00:32:19.075 [2024-11-26 07:42:03.144547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.144558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.144919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.144930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.145303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.145314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.145624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.145635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.145813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.145825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.146148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.146160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.146468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.146480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.146809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.146820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.147107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.147119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.147451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.147463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.147796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.147809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.148138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.148151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.148454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.148467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.148805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.148817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.149155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.149168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.149473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.149485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.149821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.149834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.150163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.150176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.150381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.150393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.150769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.150781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.151086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.151098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.075 [2024-11-26 07:42:03.151407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.075 [2024-11-26 07:42:03.151419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.075 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.151728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.151740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.152043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.152055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.152241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.152252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.152581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.152591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.152902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.152917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.153077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.153089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.153424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.153436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.153744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.153755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.154080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.154092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.154420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.154431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.154692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.154703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.155016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.155028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.155342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.155354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.155703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.155714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.156001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.156013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.156344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.156355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.156658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.156670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.156987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.156999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.157293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.157304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.157485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.157497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.157826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.157838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.158167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.158179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.158334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.158344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.158633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.158645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.158982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.158993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.159199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.159209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.159499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.159511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.159820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.159832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.160165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.160176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.160487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.160499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.160704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.160715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.161014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.161028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.161341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.161353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.161651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.161662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.161995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.162008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.162331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.162342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.162653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.162664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.162975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.162987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.163345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.163356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.163666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.163677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.164006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.164017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.164227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.164237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.164522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.164534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.164834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.164847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.165151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.165162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.165488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.165500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.165840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.165852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.166195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.166207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.166509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.166521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.166815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.166827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.167169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.167182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.167545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.167557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.167860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.167875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.168213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.168224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.168505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.168515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.168802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.168814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.169170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.169182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.169544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.169556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.169729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.169740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.170045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.170058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.170371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.170381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.170688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.170700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.170973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.170984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.171285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.171296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.171631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.171641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.171832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.171843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.172133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.172144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.172491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.172503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.172813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.172824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.173139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.173151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.173433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.173444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.173744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.173754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.174090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.076 [2024-11-26 07:42:03.174102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.076 qpair failed and we were unable to recover it.
00:32:19.076 [2024-11-26 07:42:03.174409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.077 [2024-11-26 07:42:03.174419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.174755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.352 [2024-11-26 07:42:03.174768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.175080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.352 [2024-11-26 07:42:03.175092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.175435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.352 [2024-11-26 07:42:03.175446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.175757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.352 [2024-11-26 07:42:03.175769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.176094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.352 [2024-11-26 07:42:03.176105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.176437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.352 [2024-11-26 07:42:03.176448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.176759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.352 [2024-11-26 07:42:03.176771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.177172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.352 [2024-11-26 07:42:03.177184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.177481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.352 [2024-11-26 07:42:03.177493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.177824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.352 [2024-11-26 07:42:03.177837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.178166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.352 [2024-11-26 07:42:03.178179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.178506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.352 [2024-11-26 07:42:03.178519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.178858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.352 [2024-11-26 07:42:03.178874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.179175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.352 [2024-11-26 07:42:03.179187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.179536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.352 [2024-11-26 07:42:03.179548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.179719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.352 [2024-11-26 07:42:03.179733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.180032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.352 [2024-11-26 07:42:03.180044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.180323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.352 [2024-11-26 07:42:03.180334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.180649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.352 [2024-11-26 07:42:03.180660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.352 qpair failed and we were unable to recover it.
00:32:19.352 [2024-11-26 07:42:03.180840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.352 [2024-11-26 07:42:03.180850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.352 qpair failed and we were unable to recover it. 00:32:19.352 [2024-11-26 07:42:03.181022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.352 [2024-11-26 07:42:03.181034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.352 qpair failed and we were unable to recover it. 00:32:19.353 [2024-11-26 07:42:03.181300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.181311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 00:32:19.353 [2024-11-26 07:42:03.181626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.181638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 00:32:19.353 [2024-11-26 07:42:03.181928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.181940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 
00:32:19.353 [2024-11-26 07:42:03.182232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.353 [2024-11-26 07:42:03.182243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.353 qpair failed and we were unable to recover it.
00:32:19.353 [2024-11-26 07:42:03.182434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.353 [2024-11-26 07:42:03.182447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.353 qpair failed and we were unable to recover it.
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Write completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Write completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Write completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Write completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Write completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Write completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Write completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Write completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Write completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Write completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Write completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 Read completed with error (sct=0, sc=8)
00:32:19.353 starting I/O failed
00:32:19.353 [2024-11-26 07:42:03.183175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:32:19.353 [2024-11-26 07:42:03.183633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.353 [2024-11-26 07:42:03.183690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90fc000b90 with addr=10.0.0.2, port=4420
00:32:19.353 qpair failed and we were unable to recover it.
00:32:19.353 [2024-11-26 07:42:03.184137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.184229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90fc000b90 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 00:32:19.353 [2024-11-26 07:42:03.184552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.184565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 00:32:19.353 [2024-11-26 07:42:03.184876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.184887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 00:32:19.353 [2024-11-26 07:42:03.185222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.185233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 00:32:19.353 [2024-11-26 07:42:03.185538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.185549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 
00:32:19.353 [2024-11-26 07:42:03.185885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.185897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 00:32:19.353 [2024-11-26 07:42:03.186217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.186228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 00:32:19.353 [2024-11-26 07:42:03.186428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.186438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 00:32:19.353 [2024-11-26 07:42:03.186745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.186756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 00:32:19.353 [2024-11-26 07:42:03.187116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.187127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 
00:32:19.353 [2024-11-26 07:42:03.187477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.187488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 00:32:19.353 [2024-11-26 07:42:03.187799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.187810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 00:32:19.353 [2024-11-26 07:42:03.188104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.188116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 00:32:19.353 [2024-11-26 07:42:03.188447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.188458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 00:32:19.353 [2024-11-26 07:42:03.188799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.188809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 
00:32:19.353 [2024-11-26 07:42:03.189127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.189139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 00:32:19.353 [2024-11-26 07:42:03.189440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.189451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 00:32:19.353 [2024-11-26 07:42:03.189748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.189760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 00:32:19.353 [2024-11-26 07:42:03.190065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.190078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 00:32:19.353 [2024-11-26 07:42:03.190393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.190405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 
00:32:19.353 [2024-11-26 07:42:03.190741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.353 [2024-11-26 07:42:03.190752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.353 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.191069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.191079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.191380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.191392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.191605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.191616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.191933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.191945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 
00:32:19.354 [2024-11-26 07:42:03.192270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.192281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.192594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.192605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.192804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.192814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.193124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.193135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.193470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.193482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 
00:32:19.354 [2024-11-26 07:42:03.193786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.193798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.194021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.194032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.194364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.194376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.194589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.194600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.194800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.194812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 
00:32:19.354 [2024-11-26 07:42:03.195146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.195159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.195436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.195448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.195778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.195790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.196111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.196124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.196455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.196467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 
00:32:19.354 [2024-11-26 07:42:03.196788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.196800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.197108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.197121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.197446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.197458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.197774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.197786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.198102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.198115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 
00:32:19.354 [2024-11-26 07:42:03.198438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.198453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.198785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.198798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.199136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.199149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.199453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.199465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.199792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.199804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 
00:32:19.354 [2024-11-26 07:42:03.200146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.200158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.200460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.200472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.200763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.200775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.201078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.201090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.201397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.201409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 
00:32:19.354 [2024-11-26 07:42:03.201712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.201724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.201932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.201944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.202213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.202224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.202535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.354 [2024-11-26 07:42:03.202546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.354 qpair failed and we were unable to recover it. 00:32:19.354 [2024-11-26 07:42:03.202858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.355 [2024-11-26 07:42:03.202874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.355 qpair failed and we were unable to recover it. 
00:32:19.355 [2024-11-26 07:42:03.203160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.355 [2024-11-26 07:42:03.203172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.355 qpair failed and we were unable to recover it.
[The same three-line failure sequence (posix.c:1054 connect() errno = 111 → nvme_tcp.c:2288 sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats continuously from 07:42:03.203377 through 07:42:03.238358.]
00:32:19.358 [2024-11-26 07:42:03.238668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.238681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.238968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.238979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.239289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.239301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.239637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.239648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.239859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.239875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 
00:32:19.358 [2024-11-26 07:42:03.240204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.240215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.240560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.240571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.240880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.240891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.241202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.241213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.241493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.241504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 
00:32:19.358 [2024-11-26 07:42:03.241811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.241822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.242131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.242143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.242471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.242482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.242817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.242828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.243139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.243150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 
00:32:19.358 [2024-11-26 07:42:03.243494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.243506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.243815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.243827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.244044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.244055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.244338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.244349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.244555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.244567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 
00:32:19.358 [2024-11-26 07:42:03.244904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.244916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.245237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.245249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.245561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.245572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.245901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.245912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.246196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.246206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 
00:32:19.358 [2024-11-26 07:42:03.246539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.246551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.246771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.246782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.246961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.246971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.247258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.247269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 00:32:19.358 [2024-11-26 07:42:03.247575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.358 [2024-11-26 07:42:03.247585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.358 qpair failed and we were unable to recover it. 
00:32:19.359 [2024-11-26 07:42:03.247927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.247938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.248275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.248287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.248657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.248668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.248979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.248994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.249203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.249214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 
00:32:19.359 [2024-11-26 07:42:03.249487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.249498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.249820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.249832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.250161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.250173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.250506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.250518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.250852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.250867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 
00:32:19.359 [2024-11-26 07:42:03.251196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.251207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.251520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.251532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.251750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.251762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.251970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.251982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.252292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.252304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 
00:32:19.359 [2024-11-26 07:42:03.252513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.252525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.252798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.252810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.253097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.253110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.253441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.253452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.253779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.253789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 
00:32:19.359 [2024-11-26 07:42:03.254015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.254026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.254232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.254245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.254431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.254445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.254717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.254728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.255064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.255075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 
00:32:19.359 [2024-11-26 07:42:03.255400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.255412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.255742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.255753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.256090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.256103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.256437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.256448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.256751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.256762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 
00:32:19.359 [2024-11-26 07:42:03.257057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.257071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.257397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.257408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.257743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.257754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.258077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.258088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.258406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.258417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 
00:32:19.359 [2024-11-26 07:42:03.258718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.258729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.259032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.259044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.259391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.359 [2024-11-26 07:42:03.259403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.359 qpair failed and we were unable to recover it. 00:32:19.359 [2024-11-26 07:42:03.259753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.360 [2024-11-26 07:42:03.259764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.360 qpair failed and we were unable to recover it. 00:32:19.360 [2024-11-26 07:42:03.260094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.360 [2024-11-26 07:42:03.260107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.360 qpair failed and we were unable to recover it. 
00:32:19.360 [2024-11-26 07:42:03.260311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.360 [2024-11-26 07:42:03.260323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.360 qpair failed and we were unable to recover it. 00:32:19.360 [2024-11-26 07:42:03.260649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.360 [2024-11-26 07:42:03.260661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.360 qpair failed and we were unable to recover it. 00:32:19.360 [2024-11-26 07:42:03.260984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.360 [2024-11-26 07:42:03.260995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.360 qpair failed and we were unable to recover it. 00:32:19.360 [2024-11-26 07:42:03.261359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.360 [2024-11-26 07:42:03.261370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.360 qpair failed and we were unable to recover it. 00:32:19.360 [2024-11-26 07:42:03.261667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.360 [2024-11-26 07:42:03.261678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.360 qpair failed and we were unable to recover it. 
00:32:19.360 [2024-11-26 07:42:03.261859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.360 [2024-11-26 07:42:03.261880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.360 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / qpair recovery failures for tqpair=0x1476490, addr=10.0.0.2, port=4420 repeat through 2024-11-26 07:42:03.297236]
00:32:19.363 [2024-11-26 07:42:03.297580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.297592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.297901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.297913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.298225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.298235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.298542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.298553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.298732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.298743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 
00:32:19.363 [2024-11-26 07:42:03.299024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.299035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.299354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.299365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.299673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.299685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.300024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.300036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.300350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.300362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 
00:32:19.363 [2024-11-26 07:42:03.300634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.300645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.300963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.300975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.301205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.301215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.301490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.301501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.301823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.301835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 
00:32:19.363 [2024-11-26 07:42:03.302165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.302178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.302510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.302521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.302830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.302841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.303158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.303170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.303503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.303514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 
00:32:19.363 [2024-11-26 07:42:03.303850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.303866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.303964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.303975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.304251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.304261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.304590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.304611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 00:32:19.363 [2024-11-26 07:42:03.304939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.363 [2024-11-26 07:42:03.304951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.363 qpair failed and we were unable to recover it. 
00:32:19.363 [2024-11-26 07:42:03.305279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.305291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.305603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.305615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.305928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.305940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.306248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.306258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.306593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.306605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 
00:32:19.364 [2024-11-26 07:42:03.306977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.306989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.307189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.307200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.307520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.307532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.307874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.307886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.308273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.308284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 
00:32:19.364 [2024-11-26 07:42:03.308591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.308603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.308931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.308942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.309254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.309275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.309602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.309613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.309944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.309957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 
00:32:19.364 [2024-11-26 07:42:03.310279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.310290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.310570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.310581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.310878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.310890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.311205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.311216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.311518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.311529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 
00:32:19.364 [2024-11-26 07:42:03.311739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.311750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.311978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.311990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.312216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.312227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.312521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.312532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.312857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.312873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 
00:32:19.364 [2024-11-26 07:42:03.313060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.313070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.313392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.313404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.313712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.313723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.314062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.314074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.314383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.314397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 
00:32:19.364 [2024-11-26 07:42:03.314699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.314711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.315053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.315065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.315410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.315422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.315753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.315764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.316121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.316133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 
00:32:19.364 [2024-11-26 07:42:03.316382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.316396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.316708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.316720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.317055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.317066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.364 [2024-11-26 07:42:03.317255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.364 [2024-11-26 07:42:03.317266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.364 qpair failed and we were unable to recover it. 00:32:19.365 [2024-11-26 07:42:03.317534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.365 [2024-11-26 07:42:03.317545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.365 qpair failed and we were unable to recover it. 
00:32:19.365 [2024-11-26 07:42:03.317866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.365 [2024-11-26 07:42:03.317877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.365 qpair failed and we were unable to recover it. 00:32:19.365 [2024-11-26 07:42:03.318163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.365 [2024-11-26 07:42:03.318174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.365 qpair failed and we were unable to recover it. 00:32:19.365 [2024-11-26 07:42:03.318486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.365 [2024-11-26 07:42:03.318498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.365 qpair failed and we were unable to recover it. 00:32:19.365 [2024-11-26 07:42:03.318807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.365 [2024-11-26 07:42:03.318818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.365 qpair failed and we were unable to recover it. 00:32:19.365 [2024-11-26 07:42:03.319152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.365 [2024-11-26 07:42:03.319165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.365 qpair failed and we were unable to recover it. 
00:32:19.365 [2024-11-26 07:42:03.319495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.365 [2024-11-26 07:42:03.319506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.365 qpair failed and we were unable to recover it. 00:32:19.365 [2024-11-26 07:42:03.319802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.365 [2024-11-26 07:42:03.319812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.365 qpair failed and we were unable to recover it. 00:32:19.365 [2024-11-26 07:42:03.320140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.365 [2024-11-26 07:42:03.320152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.365 qpair failed and we were unable to recover it. 00:32:19.365 [2024-11-26 07:42:03.320489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.365 [2024-11-26 07:42:03.320501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.365 qpair failed and we were unable to recover it. 00:32:19.365 [2024-11-26 07:42:03.320808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.365 [2024-11-26 07:42:03.320819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.365 qpair failed and we were unable to recover it. 
00:32:19.365 [2024-11-26 07:42:03.321155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.365 [2024-11-26 07:42:03.321167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.365 qpair failed and we were unable to recover it. 00:32:19.365 [2024-11-26 07:42:03.321496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.365 [2024-11-26 07:42:03.321509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.365 qpair failed and we were unable to recover it. 00:32:19.365 [2024-11-26 07:42:03.321841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.365 [2024-11-26 07:42:03.321853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.365 qpair failed and we were unable to recover it. 00:32:19.365 [2024-11-26 07:42:03.322172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.365 [2024-11-26 07:42:03.322185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.365 qpair failed and we were unable to recover it. 00:32:19.365 [2024-11-26 07:42:03.322554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.365 [2024-11-26 07:42:03.322566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.365 qpair failed and we were unable to recover it. 
00:32:19.368 [2024-11-26 07:42:03.357439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.357451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 00:32:19.368 [2024-11-26 07:42:03.357737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.357748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 00:32:19.368 [2024-11-26 07:42:03.358073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.358084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 00:32:19.368 [2024-11-26 07:42:03.358387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.358399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 00:32:19.368 [2024-11-26 07:42:03.358692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.358702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 
00:32:19.368 [2024-11-26 07:42:03.358892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.358903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 00:32:19.368 [2024-11-26 07:42:03.359222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.359233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 00:32:19.368 [2024-11-26 07:42:03.359570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.359581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 00:32:19.368 [2024-11-26 07:42:03.359883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.359894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 00:32:19.368 [2024-11-26 07:42:03.360234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.360245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 
00:32:19.368 [2024-11-26 07:42:03.360557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.360568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 00:32:19.368 [2024-11-26 07:42:03.360906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.360917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 00:32:19.368 [2024-11-26 07:42:03.361275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.361286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 00:32:19.368 [2024-11-26 07:42:03.361601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.361611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 00:32:19.368 [2024-11-26 07:42:03.361783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.361794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 
00:32:19.368 [2024-11-26 07:42:03.361983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.361994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 00:32:19.368 [2024-11-26 07:42:03.362277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.362288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 00:32:19.368 [2024-11-26 07:42:03.362608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.362619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 00:32:19.368 [2024-11-26 07:42:03.362929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.362940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 00:32:19.368 [2024-11-26 07:42:03.363252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.363266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 
00:32:19.368 [2024-11-26 07:42:03.363447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.368 [2024-11-26 07:42:03.363459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.368 qpair failed and we were unable to recover it. 00:32:19.368 [2024-11-26 07:42:03.363741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.363752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.363981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.363993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.364293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.364304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.364606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.364618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 
00:32:19.369 [2024-11-26 07:42:03.364798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.364810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.365003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.365013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.365305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.365316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.365619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.365632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.365930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.365942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 
00:32:19.369 [2024-11-26 07:42:03.366261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.366273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.366570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.366581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.366886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.366897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.367231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.367241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.367507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.367518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 
00:32:19.369 [2024-11-26 07:42:03.367801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.367812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.368120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.368133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.368433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.368444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.368711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.368721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.369044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.369055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 
00:32:19.369 [2024-11-26 07:42:03.369323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.369333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.369735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.369747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.370072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.370083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.370377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.370389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.370717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.370728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 
00:32:19.369 [2024-11-26 07:42:03.371072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.371083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.371413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.371423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.371764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.371775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.372091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.372102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.372412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.372424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 
00:32:19.369 [2024-11-26 07:42:03.372754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.372764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.373083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.373095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.373423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.373435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.373779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.373791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.374100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.374112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 
00:32:19.369 [2024-11-26 07:42:03.374337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.374347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.374660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.374671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.374974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.374985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.375296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.375307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.369 qpair failed and we were unable to recover it. 00:32:19.369 [2024-11-26 07:42:03.375592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.369 [2024-11-26 07:42:03.375603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.370 qpair failed and we were unable to recover it. 
00:32:19.370 [2024-11-26 07:42:03.375914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.370 [2024-11-26 07:42:03.375925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.370 qpair failed and we were unable to recover it. 00:32:19.370 [2024-11-26 07:42:03.376234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.370 [2024-11-26 07:42:03.376246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.370 qpair failed and we were unable to recover it. 00:32:19.370 [2024-11-26 07:42:03.376554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.370 [2024-11-26 07:42:03.376565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.370 qpair failed and we were unable to recover it. 00:32:19.370 [2024-11-26 07:42:03.376935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.370 [2024-11-26 07:42:03.376946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.370 qpair failed and we were unable to recover it. 00:32:19.370 [2024-11-26 07:42:03.377144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.370 [2024-11-26 07:42:03.377155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.370 qpair failed and we were unable to recover it. 
00:32:19.370 [2024-11-26 07:42:03.377470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.370 [2024-11-26 07:42:03.377481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.370 qpair failed and we were unable to recover it. 00:32:19.370 [2024-11-26 07:42:03.377672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.370 [2024-11-26 07:42:03.377682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.370 qpair failed and we were unable to recover it. 00:32:19.370 [2024-11-26 07:42:03.377909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.370 [2024-11-26 07:42:03.377922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.370 qpair failed and we were unable to recover it. 00:32:19.370 [2024-11-26 07:42:03.378256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.370 [2024-11-26 07:42:03.378267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.370 qpair failed and we were unable to recover it. 00:32:19.370 [2024-11-26 07:42:03.378580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.370 [2024-11-26 07:42:03.378593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.370 qpair failed and we were unable to recover it. 
00:32:19.370 [2024-11-26 07:42:03.378936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.370 [2024-11-26 07:42:03.378948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.370 qpair failed and we were unable to recover it. 00:32:19.370 [2024-11-26 07:42:03.379288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.370 [2024-11-26 07:42:03.379299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.370 qpair failed and we were unable to recover it. 00:32:19.370 [2024-11-26 07:42:03.379614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.370 [2024-11-26 07:42:03.379625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.370 qpair failed and we were unable to recover it. 00:32:19.370 [2024-11-26 07:42:03.379803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.370 [2024-11-26 07:42:03.379814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.370 qpair failed and we were unable to recover it. 00:32:19.370 [2024-11-26 07:42:03.380115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.370 [2024-11-26 07:42:03.380126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.370 qpair failed and we were unable to recover it. 
00:32:19.370 [2024-11-26 07:42:03.380344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.370 [2024-11-26 07:42:03.380355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.370 qpair failed and we were unable to recover it.
00:32:19.370 [2024-11-26 07:42:03.380661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.370 [2024-11-26 07:42:03.380673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.370 qpair failed and we were unable to recover it.
00:32:19.370 [2024-11-26 07:42:03.380987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.370 [2024-11-26 07:42:03.380999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.370 qpair failed and we were unable to recover it.
00:32:19.370 [2024-11-26 07:42:03.381323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.370 [2024-11-26 07:42:03.381335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.370 qpair failed and we were unable to recover it.
00:32:19.370 [2024-11-26 07:42:03.381673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.370 [2024-11-26 07:42:03.381684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.370 qpair failed and we were unable to recover it.
00:32:19.370 [2024-11-26 07:42:03.382028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.370 [2024-11-26 07:42:03.382040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.370 qpair failed and we were unable to recover it.
00:32:19.370 [2024-11-26 07:42:03.382337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.370 [2024-11-26 07:42:03.382348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.370 qpair failed and we were unable to recover it.
00:32:19.370 [2024-11-26 07:42:03.382656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.370 [2024-11-26 07:42:03.382668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.370 qpair failed and we were unable to recover it.
00:32:19.370 [2024-11-26 07:42:03.383010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.370 [2024-11-26 07:42:03.383021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.370 qpair failed and we were unable to recover it.
00:32:19.370 [2024-11-26 07:42:03.383357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.370 [2024-11-26 07:42:03.383368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.370 qpair failed and we were unable to recover it.
00:32:19.370 [2024-11-26 07:42:03.383663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.370 [2024-11-26 07:42:03.383674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.370 qpair failed and we were unable to recover it.
00:32:19.370 [2024-11-26 07:42:03.383978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.370 [2024-11-26 07:42:03.383990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.370 qpair failed and we were unable to recover it.
00:32:19.370 [2024-11-26 07:42:03.384270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.370 [2024-11-26 07:42:03.384282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.370 qpair failed and we were unable to recover it.
00:32:19.370 [2024-11-26 07:42:03.384609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.370 [2024-11-26 07:42:03.384620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.370 qpair failed and we were unable to recover it.
00:32:19.370 [2024-11-26 07:42:03.384924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.370 [2024-11-26 07:42:03.384936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.370 qpair failed and we were unable to recover it.
00:32:19.370 [2024-11-26 07:42:03.385215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.370 [2024-11-26 07:42:03.385226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.370 qpair failed and we were unable to recover it.
00:32:19.370 [2024-11-26 07:42:03.385526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.370 [2024-11-26 07:42:03.385538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.370 qpair failed and we were unable to recover it.
00:32:19.370 [2024-11-26 07:42:03.385838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.370 [2024-11-26 07:42:03.385850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.370 qpair failed and we were unable to recover it.
00:32:19.370 [2024-11-26 07:42:03.386108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.386119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.386313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.386325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.386663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.386675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.386974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.386986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.387163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.387174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.387474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.387486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.387788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.387800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.388099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.388110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.388385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.388396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.388699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.388710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.389018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.389029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.389358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.389370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.389669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.389682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.390013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.390024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.390359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.390371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.390690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.390700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.391010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.391023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.391349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.391360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.391696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.391706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.391882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.391893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.392218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.392229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.392537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.392552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.392885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.392896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.393196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.393207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.393552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.393563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.393877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.393889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.394213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.394224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.394529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.394541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.394730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.394743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.395033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.395044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.395330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.395341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.395663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.395674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.395975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.395986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.396301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.396311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.396597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.396608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.396917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.396929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.397128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.397138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.397446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.371 [2024-11-26 07:42:03.397457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.371 qpair failed and we were unable to recover it.
00:32:19.371 [2024-11-26 07:42:03.397787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.397799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.398113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.398124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.398451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.398463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.398763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.398774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.399090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.399102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.399428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.399439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.399743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.399755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.399962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.399973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.400292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.400303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.400629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.400641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.400916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.400929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.401249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.401259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.401538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.401549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.401876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.401888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.402179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.402189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.402507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.402519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.402846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.402856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.403170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.403182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.403487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.403498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.403801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.403813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.404115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.404127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.404432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.404444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.404753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.404764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.405027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.405038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.405334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.405345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.405649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.405661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.405855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.405876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.406202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.406213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.406545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.406556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.406856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.406870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.407210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.407221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.407534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.407546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.407841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.407852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.408183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.408195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.408415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.408425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.408730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.408741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.408929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.408942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.409243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.372 [2024-11-26 07:42:03.409254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.372 qpair failed and we were unable to recover it.
00:32:19.372 [2024-11-26 07:42:03.409573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.409585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.409902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.409914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.410242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.410253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.410455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.410465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.410784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.410796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.411117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.411128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.411334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.411345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.411658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.411669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.411972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.411983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.412311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.412321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.412652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.412663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.412842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.412854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.413149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.413161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.413497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.413511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.413844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.413855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.414187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.414199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.414529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.414540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.414874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.414886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.415271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.415282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.415588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.415598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.415904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.373 [2024-11-26 07:42:03.415915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.373 qpair failed and we were unable to recover it.
00:32:19.373 [2024-11-26 07:42:03.416197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.373 [2024-11-26 07:42:03.416208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.373 qpair failed and we were unable to recover it. 00:32:19.373 [2024-11-26 07:42:03.416543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.373 [2024-11-26 07:42:03.416553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.373 qpair failed and we were unable to recover it. 00:32:19.373 [2024-11-26 07:42:03.416857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.373 [2024-11-26 07:42:03.416872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.373 qpair failed and we were unable to recover it. 00:32:19.373 [2024-11-26 07:42:03.417171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.373 [2024-11-26 07:42:03.417182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.373 qpair failed and we were unable to recover it. 00:32:19.373 [2024-11-26 07:42:03.417480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.373 [2024-11-26 07:42:03.417491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.373 qpair failed and we were unable to recover it. 
00:32:19.373 [2024-11-26 07:42:03.417767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.373 [2024-11-26 07:42:03.417777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.373 qpair failed and we were unable to recover it. 00:32:19.373 [2024-11-26 07:42:03.418109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.373 [2024-11-26 07:42:03.418122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.373 qpair failed and we were unable to recover it. 00:32:19.373 [2024-11-26 07:42:03.418408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.373 [2024-11-26 07:42:03.418418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.373 qpair failed and we were unable to recover it. 00:32:19.373 [2024-11-26 07:42:03.418723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.373 [2024-11-26 07:42:03.418735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.373 qpair failed and we were unable to recover it. 00:32:19.373 [2024-11-26 07:42:03.419018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.373 [2024-11-26 07:42:03.419029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.373 qpair failed and we were unable to recover it. 
00:32:19.373 [2024-11-26 07:42:03.419323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.373 [2024-11-26 07:42:03.419335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.373 qpair failed and we were unable to recover it. 00:32:19.373 [2024-11-26 07:42:03.419740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.373 [2024-11-26 07:42:03.419752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.373 qpair failed and we were unable to recover it. 00:32:19.373 [2024-11-26 07:42:03.420057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.373 [2024-11-26 07:42:03.420067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.373 qpair failed and we were unable to recover it. 00:32:19.373 [2024-11-26 07:42:03.420360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.373 [2024-11-26 07:42:03.420372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.373 qpair failed and we were unable to recover it. 00:32:19.373 [2024-11-26 07:42:03.420702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.373 [2024-11-26 07:42:03.420714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.373 qpair failed and we were unable to recover it. 
00:32:19.373 [2024-11-26 07:42:03.421027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.373 [2024-11-26 07:42:03.421039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.373 qpair failed and we were unable to recover it. 00:32:19.373 [2024-11-26 07:42:03.421319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.373 [2024-11-26 07:42:03.421330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.373 qpair failed and we were unable to recover it. 00:32:19.373 [2024-11-26 07:42:03.421645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.421656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.421960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.421971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.422285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.422298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 
00:32:19.374 [2024-11-26 07:42:03.422553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.422564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.422899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.422910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.423235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.423246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.423550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.423562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.423872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.423884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 
00:32:19.374 [2024-11-26 07:42:03.424125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.424135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.424435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.424446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.424775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.424786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.425066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.425078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.425364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.425376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 
00:32:19.374 [2024-11-26 07:42:03.425741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.425752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.425944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.425955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.426272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.426283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.426618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.426629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.426958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.426969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 
00:32:19.374 [2024-11-26 07:42:03.427282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.427293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.427599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.427611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.427784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.427794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.428017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.428030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.428229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.428240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 
00:32:19.374 [2024-11-26 07:42:03.428572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.428583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.428911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.428923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.429216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.429227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.429534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.429546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.429854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.429869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 
00:32:19.374 [2024-11-26 07:42:03.430210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.430221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.430532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.430546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.430867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.430879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.431182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.431194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.431541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.431552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 
00:32:19.374 [2024-11-26 07:42:03.431932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.431944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.432260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.432271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.432579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.432590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.432873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.432883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.374 [2024-11-26 07:42:03.433213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.433224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 
00:32:19.374 [2024-11-26 07:42:03.433522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.374 [2024-11-26 07:42:03.433532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.374 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.433753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.433765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.434086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.434097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.434288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.434299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.434598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.434609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 
00:32:19.375 [2024-11-26 07:42:03.434919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.434930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.435266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.435278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.435578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.435589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.435900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.435911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.436237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.436248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 
00:32:19.375 [2024-11-26 07:42:03.436575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.436587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.436765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.436776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.437088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.437099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.437272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.437283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.437589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.437602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 
00:32:19.375 [2024-11-26 07:42:03.437943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.437954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.438254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.438265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.438606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.438617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.438955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.438966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.439296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.439307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 
00:32:19.375 [2024-11-26 07:42:03.439607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.439618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.439912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.439924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.440245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.440258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.440624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.440635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.440938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.440949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 
00:32:19.375 [2024-11-26 07:42:03.441112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.441124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.441458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.441469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.441788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.441800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.442126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.442138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.442439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.442449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 
00:32:19.375 [2024-11-26 07:42:03.442744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.442754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.443072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.443083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.443393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.443404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.443700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.443711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 00:32:19.375 [2024-11-26 07:42:03.444079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.375 [2024-11-26 07:42:03.444091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.375 qpair failed and we were unable to recover it. 
00:32:19.376 [2024-11-26 07:42:03.444406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.444418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.444763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.444774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.445097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.445108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.445446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.445457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.445829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.445840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 
00:32:19.376 [2024-11-26 07:42:03.446162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.446174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.446498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.446510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.446849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.446865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.447186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.447197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.447501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.447512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 
00:32:19.376 [2024-11-26 07:42:03.447817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.447829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.448149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.448160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.448462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.448473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.448766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.448778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.449098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.449111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 
00:32:19.376 [2024-11-26 07:42:03.449444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.449456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.449750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.449762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.450070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.450083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.450480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.450492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.450814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.450826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 
00:32:19.376 [2024-11-26 07:42:03.451147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.451160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.451516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.451528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.451841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.451853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.452170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.452183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.452516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.452529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 
00:32:19.376 [2024-11-26 07:42:03.452876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.452888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.453201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.453212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.453542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.453554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.453843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.453854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.454076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.454087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 
00:32:19.376 [2024-11-26 07:42:03.454279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.454289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.454624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.454636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.454809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.454820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.455110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.455122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.455417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.455428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 
00:32:19.376 [2024-11-26 07:42:03.455714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.455726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.456039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.456050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.456379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.376 [2024-11-26 07:42:03.456390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.376 qpair failed and we were unable to recover it. 00:32:19.376 [2024-11-26 07:42:03.456614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.456625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.456926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.456938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 
00:32:19.377 [2024-11-26 07:42:03.457257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.457267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.457468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.457479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.457806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.457818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.458120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.458131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.458315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.458328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 
00:32:19.377 [2024-11-26 07:42:03.458635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.458646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.458833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.458843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.459078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.459091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.459423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.459434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.459734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.459746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 
00:32:19.377 [2024-11-26 07:42:03.460088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.460100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.460384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.460397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.460706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.460718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.461103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.461114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.461418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.461430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 
00:32:19.377 [2024-11-26 07:42:03.461765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.461776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.461970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.461981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.462266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.462277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.462595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.462607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.462906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.462918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 
00:32:19.377 [2024-11-26 07:42:03.463226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.463236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.463444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.463455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.463779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.463790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.464005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.464016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.464340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.464351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 
00:32:19.377 [2024-11-26 07:42:03.464678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.464689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.464993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.465005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.465297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.465308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.465621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.465632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.465934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.465946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 
00:32:19.377 [2024-11-26 07:42:03.466252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.466263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.466592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.466603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.466904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.466915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.467190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.467201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 00:32:19.377 [2024-11-26 07:42:03.467500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.377 [2024-11-26 07:42:03.467512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.377 qpair failed and we were unable to recover it. 
00:32:19.657 [2024-11-26 07:42:03.467836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.657 [2024-11-26 07:42:03.467848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.657 qpair failed and we were unable to recover it. 00:32:19.657 [2024-11-26 07:42:03.468152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.657 [2024-11-26 07:42:03.468165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.657 qpair failed and we were unable to recover it. 00:32:19.658 [2024-11-26 07:42:03.468382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.658 [2024-11-26 07:42:03.468393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.658 qpair failed and we were unable to recover it. 00:32:19.658 [2024-11-26 07:42:03.468704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.658 [2024-11-26 07:42:03.468718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.658 qpair failed and we were unable to recover it. 00:32:19.658 [2024-11-26 07:42:03.469052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.658 [2024-11-26 07:42:03.469064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.658 qpair failed and we were unable to recover it. 
00:32:19.658 [2024-11-26 07:42:03.469264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.658 [2024-11-26 07:42:03.469274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.658 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x1476490 (addr=10.0.0.2, port=4420), qpair failed and unrecoverable — repeats approximately 114 more times, timestamps 2024-11-26 07:42:03.469 through 07:42:03.504, console time 00:32:19.658 through 00:32:19.661 ...]
00:32:19.661 [2024-11-26 07:42:03.505276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.505288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.505588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.505600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.505960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.505973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.506285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.506298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.506636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.506648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 
00:32:19.661 [2024-11-26 07:42:03.506956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.506970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.507297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.507309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.507496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.507508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.507813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.507825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.508176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.508189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 
00:32:19.661 [2024-11-26 07:42:03.508490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.508504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.508838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.508851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.509182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.509194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.509533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.509546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.509855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.509873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 
00:32:19.661 [2024-11-26 07:42:03.510203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.510216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.510512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.510525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.510784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.510796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.511174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.511187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.511519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.511531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 
00:32:19.661 [2024-11-26 07:42:03.511816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.511828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.512197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.512210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.512559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.512571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.512831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.512842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.513154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.513167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 
00:32:19.661 [2024-11-26 07:42:03.513384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.513397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.513716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.513728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.514032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.514045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.514232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.514244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.514440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.514454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 
00:32:19.661 [2024-11-26 07:42:03.514740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.514752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.515084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.515096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.661 qpair failed and we were unable to recover it. 00:32:19.661 [2024-11-26 07:42:03.515432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.661 [2024-11-26 07:42:03.515444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.515762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.515775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.516104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.516117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 
00:32:19.662 [2024-11-26 07:42:03.516318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.516330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.516546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.516558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.516752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.516764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.517081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.517093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.517404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.517416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 
00:32:19.662 [2024-11-26 07:42:03.517745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.517756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.518060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.518072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.518381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.518392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.518704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.518715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.519045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.519057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 
00:32:19.662 [2024-11-26 07:42:03.519371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.519383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.519708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.519724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.520099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.520111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.520436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.520448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.520755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.520766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 
00:32:19.662 [2024-11-26 07:42:03.521060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.521071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.521382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.521393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.521730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.521742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.522075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.522087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.522394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.522406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 
00:32:19.662 [2024-11-26 07:42:03.522616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.522629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.522934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.522946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.523256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.523268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.523613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.523624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.523929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.523941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 
00:32:19.662 [2024-11-26 07:42:03.524145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.524157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.524380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.524391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.524711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.524724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.525039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.662 [2024-11-26 07:42:03.525052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.662 qpair failed and we were unable to recover it. 00:32:19.662 [2024-11-26 07:42:03.525344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.663 [2024-11-26 07:42:03.525356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.663 qpair failed and we were unable to recover it. 
00:32:19.663 [2024-11-26 07:42:03.525636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.663 [2024-11-26 07:42:03.525648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.663 qpair failed and we were unable to recover it. 00:32:19.663 [2024-11-26 07:42:03.526037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.663 [2024-11-26 07:42:03.526048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.663 qpair failed and we were unable to recover it. 00:32:19.663 [2024-11-26 07:42:03.526400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.663 [2024-11-26 07:42:03.526412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.663 qpair failed and we were unable to recover it. 00:32:19.663 [2024-11-26 07:42:03.526743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.663 [2024-11-26 07:42:03.526755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.663 qpair failed and we were unable to recover it. 00:32:19.663 [2024-11-26 07:42:03.527063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.663 [2024-11-26 07:42:03.527076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.663 qpair failed and we were unable to recover it. 
00:32:19.663 [2024-11-26 07:42:03.527407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.663 [2024-11-26 07:42:03.527418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.663 qpair failed and we were unable to recover it. 00:32:19.663 [2024-11-26 07:42:03.527759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.663 [2024-11-26 07:42:03.527770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.663 qpair failed and we were unable to recover it. 00:32:19.663 [2024-11-26 07:42:03.528080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.663 [2024-11-26 07:42:03.528093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.663 qpair failed and we were unable to recover it. 00:32:19.663 [2024-11-26 07:42:03.528421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.663 [2024-11-26 07:42:03.528436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.663 qpair failed and we were unable to recover it. 00:32:19.663 [2024-11-26 07:42:03.528771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.663 [2024-11-26 07:42:03.528783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.663 qpair failed and we were unable to recover it. 
00:32:19.663 [2024-11-26 07:42:03.529091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.663 [2024-11-26 07:42:03.529103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.663 qpair failed and we were unable to recover it. 00:32:19.663 [2024-11-26 07:42:03.529440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.663 [2024-11-26 07:42:03.529452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.663 qpair failed and we were unable to recover it. 00:32:19.663 [2024-11-26 07:42:03.529822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.663 [2024-11-26 07:42:03.529834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.663 qpair failed and we were unable to recover it. 00:32:19.663 [2024-11-26 07:42:03.530008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.663 [2024-11-26 07:42:03.530021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.663 qpair failed and we were unable to recover it. 00:32:19.663 [2024-11-26 07:42:03.530316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.663 [2024-11-26 07:42:03.530327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.663 qpair failed and we were unable to recover it. 
00:32:19.666 [2024-11-26 07:42:03.565552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.565563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.565728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.565740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.566025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.566037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.566367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.566379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.566710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.566723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 
00:32:19.666 [2024-11-26 07:42:03.566959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.566973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.567280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.567293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.567630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.567642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.567966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.567978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.568254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.568265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 
00:32:19.666 [2024-11-26 07:42:03.568568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.568579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.568889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.568901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.569218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.569229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.569411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.569422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.569734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.569745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 
00:32:19.666 [2024-11-26 07:42:03.569930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.569941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.570276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.570288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.570599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.570611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.570968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.570980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.571207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.571219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 
00:32:19.666 [2024-11-26 07:42:03.571545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.571556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.571867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.571881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.572215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.572227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.572534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.572546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.572879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.572892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 
00:32:19.666 [2024-11-26 07:42:03.573218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.573229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.666 [2024-11-26 07:42:03.573562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.666 [2024-11-26 07:42:03.573573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.666 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.573905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.573917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.574013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.574025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.574184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.574198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 
00:32:19.667 [2024-11-26 07:42:03.574525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.574537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.574820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.574831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.575160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.575172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.575501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.575514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.575873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.575887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 
00:32:19.667 [2024-11-26 07:42:03.576177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.576188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.576504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.576515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.576844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.576857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.577199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.577212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.577544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.577556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 
00:32:19.667 [2024-11-26 07:42:03.577738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.577750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.577963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.577974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.578262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.578273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.578557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.578568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.578854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.578869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 
00:32:19.667 [2024-11-26 07:42:03.579193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.579206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.579507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.579518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.579852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.579867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.580169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.580180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.580487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.580500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 
00:32:19.667 [2024-11-26 07:42:03.580827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.580839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.581176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.581188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.581497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.581508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.581706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.581717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.582056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.582068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 
00:32:19.667 [2024-11-26 07:42:03.582389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.582400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.582727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.582738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.583075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.583087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.583396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.583408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.667 [2024-11-26 07:42:03.583732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.583744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 
00:32:19.667 [2024-11-26 07:42:03.584030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.667 [2024-11-26 07:42:03.584042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.667 qpair failed and we were unable to recover it. 00:32:19.668 [2024-11-26 07:42:03.584397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.668 [2024-11-26 07:42:03.584409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.668 qpair failed and we were unable to recover it. 00:32:19.668 [2024-11-26 07:42:03.584708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.668 [2024-11-26 07:42:03.584721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.668 qpair failed and we were unable to recover it. 00:32:19.668 [2024-11-26 07:42:03.585055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.668 [2024-11-26 07:42:03.585067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.668 qpair failed and we were unable to recover it. 00:32:19.668 [2024-11-26 07:42:03.585372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.668 [2024-11-26 07:42:03.585385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.668 qpair failed and we were unable to recover it. 
00:32:19.668 [2024-11-26 07:42:03.585683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.668 [2024-11-26 07:42:03.585695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.668 qpair failed and we were unable to recover it. 00:32:19.668 [2024-11-26 07:42:03.586015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.668 [2024-11-26 07:42:03.586027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.668 qpair failed and we were unable to recover it. 00:32:19.668 [2024-11-26 07:42:03.586360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.668 [2024-11-26 07:42:03.586372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.668 qpair failed and we were unable to recover it. 00:32:19.668 [2024-11-26 07:42:03.586701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.668 [2024-11-26 07:42:03.586712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.668 qpair failed and we were unable to recover it. 00:32:19.668 [2024-11-26 07:42:03.587053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.668 [2024-11-26 07:42:03.587067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.668 qpair failed and we were unable to recover it. 
00:32:19.668 [2024-11-26 07:42:03.587369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.668 [2024-11-26 07:42:03.587381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.668 qpair failed and we were unable to recover it. 00:32:19.668 [2024-11-26 07:42:03.587720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.668 [2024-11-26 07:42:03.587732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.668 qpair failed and we were unable to recover it. 00:32:19.668 [2024-11-26 07:42:03.588043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.668 [2024-11-26 07:42:03.588054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.668 qpair failed and we were unable to recover it. 00:32:19.668 [2024-11-26 07:42:03.588227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.668 [2024-11-26 07:42:03.588241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.668 qpair failed and we were unable to recover it. 00:32:19.668 [2024-11-26 07:42:03.588571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.668 [2024-11-26 07:42:03.588581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.668 qpair failed and we were unable to recover it. 
00:32:19.668 [2024-11-26 07:42:03.588761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.668 [2024-11-26 07:42:03.588774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.668 qpair failed and we were unable to recover it. 00:32:19.668 [2024-11-26 07:42:03.589085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.668 [2024-11-26 07:42:03.589098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.668 qpair failed and we were unable to recover it. 00:32:19.668 [2024-11-26 07:42:03.589398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.668 [2024-11-26 07:42:03.589410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.668 qpair failed and we were unable to recover it. 00:32:19.668 [2024-11-26 07:42:03.589719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.668 [2024-11-26 07:42:03.589731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.668 qpair failed and we were unable to recover it. 00:32:19.668 [2024-11-26 07:42:03.590070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.668 [2024-11-26 07:42:03.590081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.668 qpair failed and we were unable to recover it. 
00:32:19.671 [2024-11-26 07:42:03.624581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.671 [2024-11-26 07:42:03.624592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.671 qpair failed and we were unable to recover it. 00:32:19.671 [2024-11-26 07:42:03.624943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.671 [2024-11-26 07:42:03.624954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.671 qpair failed and we were unable to recover it. 00:32:19.671 [2024-11-26 07:42:03.625296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.671 [2024-11-26 07:42:03.625307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.671 qpair failed and we were unable to recover it. 00:32:19.671 [2024-11-26 07:42:03.625569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.671 [2024-11-26 07:42:03.625580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.671 qpair failed and we were unable to recover it. 00:32:19.671 [2024-11-26 07:42:03.625892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.671 [2024-11-26 07:42:03.625903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.671 qpair failed and we were unable to recover it. 
00:32:19.671 [2024-11-26 07:42:03.626109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.671 [2024-11-26 07:42:03.626122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.671 qpair failed and we were unable to recover it. 00:32:19.671 [2024-11-26 07:42:03.626326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.671 [2024-11-26 07:42:03.626336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.671 qpair failed and we were unable to recover it. 00:32:19.671 [2024-11-26 07:42:03.626515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.671 [2024-11-26 07:42:03.626526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.671 qpair failed and we were unable to recover it. 00:32:19.671 [2024-11-26 07:42:03.626852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.671 [2024-11-26 07:42:03.626867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.671 qpair failed and we were unable to recover it. 00:32:19.671 [2024-11-26 07:42:03.627182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.671 [2024-11-26 07:42:03.627194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.671 qpair failed and we were unable to recover it. 
00:32:19.671 [2024-11-26 07:42:03.627404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.671 [2024-11-26 07:42:03.627415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.671 qpair failed and we were unable to recover it. 00:32:19.671 [2024-11-26 07:42:03.627720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.671 [2024-11-26 07:42:03.627732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.671 qpair failed and we were unable to recover it. 00:32:19.671 [2024-11-26 07:42:03.628022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.671 [2024-11-26 07:42:03.628034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.671 qpair failed and we were unable to recover it. 00:32:19.671 [2024-11-26 07:42:03.628358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.671 [2024-11-26 07:42:03.628369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.671 qpair failed and we were unable to recover it. 00:32:19.671 [2024-11-26 07:42:03.628707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.671 [2024-11-26 07:42:03.628718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.671 qpair failed and we were unable to recover it. 
00:32:19.671 [2024-11-26 07:42:03.629030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.671 [2024-11-26 07:42:03.629041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.671 qpair failed and we were unable to recover it. 00:32:19.671 [2024-11-26 07:42:03.629361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.671 [2024-11-26 07:42:03.629372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.671 qpair failed and we were unable to recover it. 00:32:19.671 [2024-11-26 07:42:03.629689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.671 [2024-11-26 07:42:03.629701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.671 qpair failed and we were unable to recover it. 00:32:19.671 [2024-11-26 07:42:03.629896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.671 [2024-11-26 07:42:03.629909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.671 qpair failed and we were unable to recover it. 00:32:19.671 [2024-11-26 07:42:03.630259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.630270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 
00:32:19.672 [2024-11-26 07:42:03.630513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.630524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.630828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.630839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.631211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.631222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.631530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.631541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.631731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.631743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 
00:32:19.672 [2024-11-26 07:42:03.632037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.632048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.632354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.632365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.632671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.632682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.632996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.633007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.633331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.633342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 
00:32:19.672 [2024-11-26 07:42:03.633643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.633654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.633977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.633989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.634302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.634317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.634657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.634667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.634977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.634988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 
00:32:19.672 [2024-11-26 07:42:03.635290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.635301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.635625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.635635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.636010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.636022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.636346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.636356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.636668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.636679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 
00:32:19.672 [2024-11-26 07:42:03.636991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.637002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.637317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.637328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.637667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.637678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.637987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.637999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.638330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.638341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 
00:32:19.672 [2024-11-26 07:42:03.638662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.638673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.638994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.639006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.639311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.639322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.639530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.639540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.639869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.639880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 
00:32:19.672 [2024-11-26 07:42:03.640215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.640227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.640529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.640540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.640887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.640899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.641169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.641179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.641512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.641524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 
00:32:19.672 [2024-11-26 07:42:03.641806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.641816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.642114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.642126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.672 qpair failed and we were unable to recover it. 00:32:19.672 [2024-11-26 07:42:03.642480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.672 [2024-11-26 07:42:03.642491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.642821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.642833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.643050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.643062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 
00:32:19.673 [2024-11-26 07:42:03.643275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.643286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.643555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.643565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.643856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.643870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.643988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.644000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.644303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.644314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 
00:32:19.673 [2024-11-26 07:42:03.644502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.644512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.644827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.644838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.645149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.645161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.645493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.645505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.645725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.645738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 
00:32:19.673 [2024-11-26 07:42:03.645980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.645992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.646176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.646185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.646465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.646476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.646787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.646798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.647149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.647161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 
00:32:19.673 [2024-11-26 07:42:03.647354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.647366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.647672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.647684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.648031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.648043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.648242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.648253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.648443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.648454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 
00:32:19.673 [2024-11-26 07:42:03.648766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.648777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.649063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.649074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.649418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.649429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.649744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.649755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.649910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.649922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 
00:32:19.673 [2024-11-26 07:42:03.650230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.650242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.650594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.650605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.650826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.650836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.651060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.651072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.651373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.651384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 
00:32:19.673 [2024-11-26 07:42:03.651685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.651695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.651926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.651937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.652229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.652241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.652453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.652464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 00:32:19.673 [2024-11-26 07:42:03.652811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.652822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.673 qpair failed and we were unable to recover it. 
00:32:19.673 [2024-11-26 07:42:03.652952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.673 [2024-11-26 07:42:03.652963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.653239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.653250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.653575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.653586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.653886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.653897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.654213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.654225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 
00:32:19.674 [2024-11-26 07:42:03.654581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.654596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.654915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.654927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.655272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.655283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.655619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.655632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.655986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.655997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 
00:32:19.674 [2024-11-26 07:42:03.656321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.656332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.656631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.656642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.656919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.656930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.657257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.657268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.657469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.657479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 
00:32:19.674 [2024-11-26 07:42:03.657779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.657790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.658044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.658055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.658353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.658364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.658644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.658655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.658939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.658951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 
00:32:19.674 [2024-11-26 07:42:03.659171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.659182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.659494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.659505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.659816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.659827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.660027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.660038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.660324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.660335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 
00:32:19.674 [2024-11-26 07:42:03.660665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.660676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.660995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.661006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.661349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.661360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.661704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.661716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.661970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.661981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 
00:32:19.674 [2024-11-26 07:42:03.662314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.662326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.662626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.662638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.662932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.662946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.663159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.663170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.663500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.663512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 
00:32:19.674 [2024-11-26 07:42:03.663833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.663845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.664089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.664100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.664409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.664420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.674 [2024-11-26 07:42:03.664637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.674 [2024-11-26 07:42:03.664649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.674 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.665017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.665029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 
00:32:19.675 [2024-11-26 07:42:03.665313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.665324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.665609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.665621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.665830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.665842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.666160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.666172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.666505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.666516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 
00:32:19.675 [2024-11-26 07:42:03.666828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.666839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.667052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.667063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.667371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.667383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.667692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.667704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.667965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.667976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 
00:32:19.675 [2024-11-26 07:42:03.668291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.668302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.668595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.668606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.668915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.668927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.669263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.669273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.669467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.669478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 
00:32:19.675 [2024-11-26 07:42:03.669776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.669787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.670118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.670131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.670388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.670399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.670666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.670676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.670972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.670983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 
00:32:19.675 [2024-11-26 07:42:03.671279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.671289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.671601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.671613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.671791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.671801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.672024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.672036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.672400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.672411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 
00:32:19.675 [2024-11-26 07:42:03.672625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.672636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.672952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.672963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.673308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.673319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.673620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.673631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.673916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.673928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 
00:32:19.675 [2024-11-26 07:42:03.674250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.674261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.674566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.674578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.674881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.674893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.675211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.675222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.675538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.675549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 
00:32:19.675 [2024-11-26 07:42:03.675888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.675899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.675 [2024-11-26 07:42:03.676213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.675 [2024-11-26 07:42:03.676224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.675 qpair failed and we were unable to recover it. 00:32:19.676 [2024-11-26 07:42:03.676438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.676 [2024-11-26 07:42:03.676449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.676 qpair failed and we were unable to recover it. 00:32:19.676 [2024-11-26 07:42:03.676747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.676 [2024-11-26 07:42:03.676758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.676 qpair failed and we were unable to recover it. 00:32:19.676 [2024-11-26 07:42:03.677109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.676 [2024-11-26 07:42:03.677121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.676 qpair failed and we were unable to recover it. 
00:32:19.679 [2024-11-26 07:42:03.709055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.709066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.709391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.709401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.709711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.709722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.710044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.710055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.710263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.710273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 
00:32:19.679 [2024-11-26 07:42:03.710534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.710545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.710877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.710888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.711200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.711212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.711573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.711584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.711773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.711785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 
00:32:19.679 [2024-11-26 07:42:03.711962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.711973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.712289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.712299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.712599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.712610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.712923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.712934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.713177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.713188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 
00:32:19.679 [2024-11-26 07:42:03.713517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.713528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.713845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.713856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.714210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.714221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.714533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.714545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.714885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.714897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 
00:32:19.679 [2024-11-26 07:42:03.715119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.715130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.715323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.715335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.715666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.715677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.715981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.715993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.716316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.716327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 
00:32:19.679 [2024-11-26 07:42:03.716634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.716645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.716846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.716857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.717156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.717167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.717482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.717494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.717696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.717707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 
00:32:19.679 [2024-11-26 07:42:03.718044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.718055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.718388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.718399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.718693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.718706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.718938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.718950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.679 [2024-11-26 07:42:03.719145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.719155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 
00:32:19.679 [2024-11-26 07:42:03.719495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.679 [2024-11-26 07:42:03.719507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.679 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.719819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.719830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.720209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.720220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.720547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.720558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.720860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.720876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 
00:32:19.680 [2024-11-26 07:42:03.721171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.721182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.721471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.721482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.721812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.721822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.722114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.722125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.722489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.722500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 
00:32:19.680 [2024-11-26 07:42:03.722804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.722815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.723034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.723046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.723358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.723368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.723672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.723682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.723974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.723985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 
00:32:19.680 [2024-11-26 07:42:03.724200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.724211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.724557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.724567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.724872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.724884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.725061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.725072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.725251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.725263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 
00:32:19.680 [2024-11-26 07:42:03.725568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.725579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.725882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.725893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.726207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.726218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.726526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.726536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.726606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.726618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 
00:32:19.680 [2024-11-26 07:42:03.726908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.726920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.727228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.727239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.727554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.727564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.727908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.727919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.728231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.728242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 
00:32:19.680 [2024-11-26 07:42:03.728451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.728463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.728787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.728798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.728984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.728996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.729297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.729308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 00:32:19.680 [2024-11-26 07:42:03.729621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.680 [2024-11-26 07:42:03.729632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.680 qpair failed and we were unable to recover it. 
00:32:19.680 [2024-11-26 07:42:03.729922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.729933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.730142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.730152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.730382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.730393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.730686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.730697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.731004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.731015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 
00:32:19.681 [2024-11-26 07:42:03.731351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.731362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.731648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.731660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.731988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.732000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.732295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.732306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.732620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.732630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 
00:32:19.681 [2024-11-26 07:42:03.732930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.732941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.733254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.733264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.733575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.733585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.733964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.733975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.734275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.734285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 
00:32:19.681 [2024-11-26 07:42:03.734616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.734628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.734934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.734945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.735261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.735272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.735584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.735595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.735893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.735904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 
00:32:19.681 [2024-11-26 07:42:03.736198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.736209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.736391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.736402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.736761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.736772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.737067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.737078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.737321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.737332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 
00:32:19.681 [2024-11-26 07:42:03.737662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.737673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.737971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.737982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.738294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.738306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.738615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.738627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.738913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.738924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 
00:32:19.681 [2024-11-26 07:42:03.739263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.739274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.739602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.739613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.739780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.739791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.740113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.740124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 00:32:19.681 [2024-11-26 07:42:03.740431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.740443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.681 qpair failed and we were unable to recover it. 
00:32:19.681 [2024-11-26 07:42:03.740745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.681 [2024-11-26 07:42:03.740756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.741041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.741052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.741349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.741360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.741655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.741666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.742002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.742013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 
00:32:19.682 [2024-11-26 07:42:03.742210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.742221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.742555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.742566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.742950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.742961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.743369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.743380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.743742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.743753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 
00:32:19.682 [2024-11-26 07:42:03.744063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.744075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.744444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.744455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.744766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.744777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.745071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.745084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.745282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.745294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 
00:32:19.682 [2024-11-26 07:42:03.745481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.745493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.745852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.745867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.746191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.746202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.746426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.746438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.746730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.746740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 
00:32:19.682 [2024-11-26 07:42:03.747037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.747048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.747378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.747389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.747722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.747735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.748024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.748035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.748356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.748367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 
00:32:19.682 [2024-11-26 07:42:03.748487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.748498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.748804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.748815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.749130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.749142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.749355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.749366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.749441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.749452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 
00:32:19.682 [2024-11-26 07:42:03.749759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.749770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.750118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.750129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.750439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.750450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.682 qpair failed and we were unable to recover it. 00:32:19.682 [2024-11-26 07:42:03.750757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.682 [2024-11-26 07:42:03.750767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.751063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.751074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 
00:32:19.683 [2024-11-26 07:42:03.751274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.751286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.751603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.751614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.751964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.751975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.752280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.752292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.752605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.752616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 
00:32:19.683 [2024-11-26 07:42:03.752926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.752937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.753284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.753296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.753613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.753624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.753920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.753932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.754055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.754064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 
00:32:19.683 [2024-11-26 07:42:03.754390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.754401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.754699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.754710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.754988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.755000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.755346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.755358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.755642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.755657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 
00:32:19.683 [2024-11-26 07:42:03.755949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.755960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.756165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.756176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.756376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.756388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.756692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.756703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.757028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.757040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 
00:32:19.683 [2024-11-26 07:42:03.757332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.757343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.757675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.757686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.757952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.757965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.758287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.758299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.758612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.758623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 
00:32:19.683 [2024-11-26 07:42:03.758931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.758943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.759279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.759289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.759622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.759633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.759940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.759952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.760308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.760320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 
00:32:19.683 [2024-11-26 07:42:03.760628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.760639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.760934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.760946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.761267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.761278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.761613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.761624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 00:32:19.683 [2024-11-26 07:42:03.761923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.683 [2024-11-26 07:42:03.761935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.683 qpair failed and we were unable to recover it. 
00:32:19.962 [2024-11-26 07:42:03.795684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.962 [2024-11-26 07:42:03.795695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.962 qpair failed and we were unable to recover it. 00:32:19.962 [2024-11-26 07:42:03.796011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.962 [2024-11-26 07:42:03.796023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.962 qpair failed and we were unable to recover it. 00:32:19.962 [2024-11-26 07:42:03.796243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.962 [2024-11-26 07:42:03.796254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.962 qpair failed and we were unable to recover it. 00:32:19.962 [2024-11-26 07:42:03.796454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.962 [2024-11-26 07:42:03.796465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.962 qpair failed and we were unable to recover it. 00:32:19.962 [2024-11-26 07:42:03.796781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.962 [2024-11-26 07:42:03.796792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.962 qpair failed and we were unable to recover it. 
00:32:19.962 [2024-11-26 07:42:03.797119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.962 [2024-11-26 07:42:03.797130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.962 qpair failed and we were unable to recover it. 00:32:19.962 [2024-11-26 07:42:03.797462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.962 [2024-11-26 07:42:03.797474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.962 qpair failed and we were unable to recover it. 00:32:19.962 [2024-11-26 07:42:03.797657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.962 [2024-11-26 07:42:03.797669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.962 qpair failed and we were unable to recover it. 00:32:19.962 [2024-11-26 07:42:03.797986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.962 [2024-11-26 07:42:03.797997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.962 qpair failed and we were unable to recover it. 00:32:19.962 [2024-11-26 07:42:03.798199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.962 [2024-11-26 07:42:03.798211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.962 qpair failed and we were unable to recover it. 
00:32:19.962 [2024-11-26 07:42:03.798535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.962 [2024-11-26 07:42:03.798547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.962 qpair failed and we were unable to recover it. 00:32:19.962 [2024-11-26 07:42:03.798734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.962 [2024-11-26 07:42:03.798745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.962 qpair failed and we were unable to recover it. 00:32:19.962 [2024-11-26 07:42:03.799059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.962 [2024-11-26 07:42:03.799070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.962 qpair failed and we were unable to recover it. 00:32:19.962 [2024-11-26 07:42:03.799383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.962 [2024-11-26 07:42:03.799394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.962 qpair failed and we were unable to recover it. 00:32:19.962 [2024-11-26 07:42:03.799729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.962 [2024-11-26 07:42:03.799740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.962 qpair failed and we were unable to recover it. 
00:32:19.962 [2024-11-26 07:42:03.799947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.962 [2024-11-26 07:42:03.799958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.962 qpair failed and we were unable to recover it. 00:32:19.962 [2024-11-26 07:42:03.800204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.962 [2024-11-26 07:42:03.800215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.962 qpair failed and we were unable to recover it. 00:32:19.962 [2024-11-26 07:42:03.800511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.962 [2024-11-26 07:42:03.800522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.962 qpair failed and we were unable to recover it. 00:32:19.962 [2024-11-26 07:42:03.800790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.800801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.801089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.801101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 
00:32:19.963 [2024-11-26 07:42:03.801452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.801464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.801807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.801819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.802147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.802158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.802464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.802475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.802821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.802832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 
00:32:19.963 [2024-11-26 07:42:03.803156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.803167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.803476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.803486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.803787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.803798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.804124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.804136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.804450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.804462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 
00:32:19.963 [2024-11-26 07:42:03.804794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.804806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.804903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.804914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.805220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.805231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.805572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.805583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.805926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.805937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 
00:32:19.963 [2024-11-26 07:42:03.806242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.806253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.806432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.806444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.806753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.806764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.807062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.807073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.807388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.807398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 
00:32:19.963 [2024-11-26 07:42:03.807713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.807724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.807889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.807901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.808231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.808242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.808546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.808558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.808743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.808754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 
00:32:19.963 [2024-11-26 07:42:03.808816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.808826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.809146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.809157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.809493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.809504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.809711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.809722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.810036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.810047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 
00:32:19.963 [2024-11-26 07:42:03.810357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.810368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.810687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.810698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.810871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.810883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.811252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.811262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.811570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.811582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 
00:32:19.963 [2024-11-26 07:42:03.811871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.811883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.963 [2024-11-26 07:42:03.812191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.963 [2024-11-26 07:42:03.812202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.963 qpair failed and we were unable to recover it. 00:32:19.964 [2024-11-26 07:42:03.812505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.812516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 00:32:19.964 [2024-11-26 07:42:03.812823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.812834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 00:32:19.964 [2024-11-26 07:42:03.813101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.813115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 
00:32:19.964 [2024-11-26 07:42:03.813192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.813201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 00:32:19.964 [2024-11-26 07:42:03.813477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.813488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 00:32:19.964 [2024-11-26 07:42:03.813688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.813700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 00:32:19.964 [2024-11-26 07:42:03.813908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.813920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 00:32:19.964 [2024-11-26 07:42:03.814145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.814156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 
00:32:19.964 [2024-11-26 07:42:03.814468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.814479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 00:32:19.964 [2024-11-26 07:42:03.814797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.814808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 00:32:19.964 [2024-11-26 07:42:03.815187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.815199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 00:32:19.964 [2024-11-26 07:42:03.815378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.815390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 00:32:19.964 [2024-11-26 07:42:03.815770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.815782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 
00:32:19.964 [2024-11-26 07:42:03.816075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.816086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 00:32:19.964 [2024-11-26 07:42:03.816287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.816297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 00:32:19.964 [2024-11-26 07:42:03.816580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.816590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 00:32:19.964 [2024-11-26 07:42:03.816908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.816919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 00:32:19.964 [2024-11-26 07:42:03.817267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.817279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 
00:32:19.964 [2024-11-26 07:42:03.817588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.817600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 00:32:19.964 [2024-11-26 07:42:03.817933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.817945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 00:32:19.964 [2024-11-26 07:42:03.818125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.818136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 00:32:19.964 [2024-11-26 07:42:03.818467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.818478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 00:32:19.964 [2024-11-26 07:42:03.818864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.964 [2024-11-26 07:42:03.818876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.964 qpair failed and we were unable to recover it. 
00:32:19.967 [2024-11-26 07:42:03.853209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.967 [2024-11-26 07:42:03.853220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.967 qpair failed and we were unable to recover it. 00:32:19.967 [2024-11-26 07:42:03.853521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.967 [2024-11-26 07:42:03.853531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.967 qpair failed and we were unable to recover it. 00:32:19.967 [2024-11-26 07:42:03.853859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.967 [2024-11-26 07:42:03.853876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.967 qpair failed and we were unable to recover it. 00:32:19.967 [2024-11-26 07:42:03.854097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.967 [2024-11-26 07:42:03.854109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.967 qpair failed and we were unable to recover it. 00:32:19.967 [2024-11-26 07:42:03.854439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.967 [2024-11-26 07:42:03.854450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.967 qpair failed and we were unable to recover it. 
00:32:19.967 [2024-11-26 07:42:03.854764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.967 [2024-11-26 07:42:03.854775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.967 qpair failed and we were unable to recover it. 00:32:19.967 [2024-11-26 07:42:03.855011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.967 [2024-11-26 07:42:03.855023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.967 qpair failed and we were unable to recover it. 00:32:19.967 [2024-11-26 07:42:03.855369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.967 [2024-11-26 07:42:03.855380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.967 qpair failed and we were unable to recover it. 00:32:19.967 [2024-11-26 07:42:03.855592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.967 [2024-11-26 07:42:03.855603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.967 qpair failed and we were unable to recover it. 00:32:19.967 [2024-11-26 07:42:03.855827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.967 [2024-11-26 07:42:03.855840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.967 qpair failed and we were unable to recover it. 
00:32:19.967 [2024-11-26 07:42:03.856136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.967 [2024-11-26 07:42:03.856150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.967 qpair failed and we were unable to recover it. 00:32:19.967 [2024-11-26 07:42:03.856327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.967 [2024-11-26 07:42:03.856339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.967 qpair failed and we were unable to recover it. 00:32:19.967 [2024-11-26 07:42:03.856672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.967 [2024-11-26 07:42:03.856683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.967 qpair failed and we were unable to recover it. 00:32:19.967 [2024-11-26 07:42:03.857033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.967 [2024-11-26 07:42:03.857045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.967 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.857354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.857364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 
00:32:19.968 [2024-11-26 07:42:03.857698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.857709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.857937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.857948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.858272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.858283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.858562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.858576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.858894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.858906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 
00:32:19.968 [2024-11-26 07:42:03.859192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.859203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.859406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.859418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.859732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.859743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.860084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.860095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.860426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.860437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 
00:32:19.968 [2024-11-26 07:42:03.860659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.860670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.860983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.860994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.861329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.861340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.861523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.861535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.861859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.861874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 
00:32:19.968 [2024-11-26 07:42:03.862217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.862228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.862536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.862547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.862888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.862900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.863224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.863235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.863562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.863573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 
00:32:19.968 [2024-11-26 07:42:03.863804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.863815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.864122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.864134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.864488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.864499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.864873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.864884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.865207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.865218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 
00:32:19.968 [2024-11-26 07:42:03.865519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.865530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.865910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.865922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.866190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.866201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.866524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.866535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.866843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.866855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 
00:32:19.968 [2024-11-26 07:42:03.867124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.867136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.867529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.867541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.867877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.867889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.868186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.868196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.868475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.868486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 
00:32:19.968 [2024-11-26 07:42:03.868819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.868830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.869159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.968 [2024-11-26 07:42:03.869171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.968 qpair failed and we were unable to recover it. 00:32:19.968 [2024-11-26 07:42:03.869477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.869488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 00:32:19.969 [2024-11-26 07:42:03.869788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.869799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 00:32:19.969 [2024-11-26 07:42:03.870111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.870122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 
00:32:19.969 [2024-11-26 07:42:03.870423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.870434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 00:32:19.969 [2024-11-26 07:42:03.870735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.870747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 00:32:19.969 [2024-11-26 07:42:03.871076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.871088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 00:32:19.969 [2024-11-26 07:42:03.871390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.871401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 00:32:19.969 [2024-11-26 07:42:03.871738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.871749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 
00:32:19.969 [2024-11-26 07:42:03.872054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.872065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 00:32:19.969 [2024-11-26 07:42:03.872376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.872386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 00:32:19.969 [2024-11-26 07:42:03.872740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.872751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 00:32:19.969 [2024-11-26 07:42:03.873081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.873092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 00:32:19.969 [2024-11-26 07:42:03.873395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.873406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 
00:32:19.969 [2024-11-26 07:42:03.873622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.873634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 00:32:19.969 [2024-11-26 07:42:03.873975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.873986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 00:32:19.969 [2024-11-26 07:42:03.874290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.874301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 00:32:19.969 [2024-11-26 07:42:03.874609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.874621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 00:32:19.969 [2024-11-26 07:42:03.874970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.874982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 
00:32:19.969 [2024-11-26 07:42:03.875264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.875276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 00:32:19.969 [2024-11-26 07:42:03.875614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.875626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 00:32:19.969 [2024-11-26 07:42:03.875969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.875980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 00:32:19.969 [2024-11-26 07:42:03.876292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.876303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 00:32:19.969 [2024-11-26 07:42:03.876512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.969 [2024-11-26 07:42:03.876522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.969 qpair failed and we were unable to recover it. 
00:32:19.969 [2024-11-26 07:42:03.876808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.969 [2024-11-26 07:42:03.876820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.969 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats for 113 further connect attempts between 07:42:03.877069 and 07:42:03.910563, each failing with errno = 111 against tqpair=0x1476490 at 10.0.0.2, port 4420 ...]
00:32:19.972 [2024-11-26 07:42:03.910877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.972 [2024-11-26 07:42:03.910888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.972 qpair failed and we were unable to recover it.
00:32:19.972 [2024-11-26 07:42:03.911215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.972 [2024-11-26 07:42:03.911226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.972 qpair failed and we were unable to recover it. 00:32:19.972 [2024-11-26 07:42:03.911448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.972 [2024-11-26 07:42:03.911460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.972 qpair failed and we were unable to recover it. 00:32:19.972 [2024-11-26 07:42:03.911769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.972 [2024-11-26 07:42:03.911780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.972 qpair failed and we were unable to recover it. 00:32:19.972 [2024-11-26 07:42:03.912099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.972 [2024-11-26 07:42:03.912110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.972 qpair failed and we were unable to recover it. 00:32:19.972 [2024-11-26 07:42:03.912406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.972 [2024-11-26 07:42:03.912417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.972 qpair failed and we were unable to recover it. 
00:32:19.972 [2024-11-26 07:42:03.912751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.972 [2024-11-26 07:42:03.912762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.972 qpair failed and we were unable to recover it. 00:32:19.972 [2024-11-26 07:42:03.913084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.972 [2024-11-26 07:42:03.913095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.972 qpair failed and we were unable to recover it. 00:32:19.972 [2024-11-26 07:42:03.913289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.972 [2024-11-26 07:42:03.913301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.972 qpair failed and we were unable to recover it. 00:32:19.972 [2024-11-26 07:42:03.913585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.972 [2024-11-26 07:42:03.913596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.972 qpair failed and we were unable to recover it. 00:32:19.972 [2024-11-26 07:42:03.913929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.972 [2024-11-26 07:42:03.913941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.972 qpair failed and we were unable to recover it. 
00:32:19.972 [2024-11-26 07:42:03.914256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.914267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.914599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.914611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.914910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.914921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.915238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.915249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.915586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.915597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 
00:32:19.973 [2024-11-26 07:42:03.915902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.915914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.916131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.916142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.916409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.916420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.916720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.916733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.917031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.917042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 
00:32:19.973 [2024-11-26 07:42:03.917371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.917382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.917722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.917733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.918063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.918074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.918382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.918392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.918761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.918771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 
00:32:19.973 [2024-11-26 07:42:03.919080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.919092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.919426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.919438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.919769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.919780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.919987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.919998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.920175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.920187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 
00:32:19.973 [2024-11-26 07:42:03.920516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.920526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.920836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.920847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.921190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.921202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.921536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.921547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.921843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.921854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 
00:32:19.973 [2024-11-26 07:42:03.922185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.922197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.922379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.922390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.922643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.922653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.922979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.922991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.923177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.923188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 
00:32:19.973 [2024-11-26 07:42:03.923517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.923528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.923859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.923875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.924190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.924201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.924494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.924505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.924812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.924823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 
00:32:19.973 [2024-11-26 07:42:03.925126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.925139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.925472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.925483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.925785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.973 [2024-11-26 07:42:03.925798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.973 qpair failed and we were unable to recover it. 00:32:19.973 [2024-11-26 07:42:03.926126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.926138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.926460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.926471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 
00:32:19.974 [2024-11-26 07:42:03.926785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.926796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.927130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.927141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.927483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.927494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.927623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.927634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.927957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.927968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 
00:32:19.974 [2024-11-26 07:42:03.928136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.928147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.928447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.928458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.928764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.928775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.929135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.929146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.929357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.929368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 
00:32:19.974 [2024-11-26 07:42:03.929693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.929704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.930034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.930046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.930257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.930267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.930573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.930584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.930772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.930784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 
00:32:19.974 [2024-11-26 07:42:03.931074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.931085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.931441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.931452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.931765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.931777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.932088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.932100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.932438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.932448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 
00:32:19.974 [2024-11-26 07:42:03.932753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.932764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.933036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.933047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.933355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.933366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.933738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.933749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.934080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.934091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 
00:32:19.974 [2024-11-26 07:42:03.934436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.934447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.934747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.934758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.935100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.935112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.935442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.935454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 00:32:19.974 [2024-11-26 07:42:03.935671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.974 [2024-11-26 07:42:03.935681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.974 qpair failed and we were unable to recover it. 
00:32:19.977 [2024-11-26 07:42:03.970567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:19.977 [2024-11-26 07:42:03.970578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:19.977 qpair failed and we were unable to recover it.
00:32:19.977 [2024-11-26 07:42:03.970886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.977 [2024-11-26 07:42:03.970898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.977 qpair failed and we were unable to recover it. 00:32:19.977 [2024-11-26 07:42:03.971220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.977 [2024-11-26 07:42:03.971231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.977 qpair failed and we were unable to recover it. 00:32:19.977 [2024-11-26 07:42:03.971530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.977 [2024-11-26 07:42:03.971541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.977 qpair failed and we were unable to recover it. 00:32:19.977 [2024-11-26 07:42:03.971899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.977 [2024-11-26 07:42:03.971911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.977 qpair failed and we were unable to recover it. 00:32:19.977 [2024-11-26 07:42:03.972272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.977 [2024-11-26 07:42:03.972282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.977 qpair failed and we were unable to recover it. 
00:32:19.977 [2024-11-26 07:42:03.972570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.977 [2024-11-26 07:42:03.972580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.977 qpair failed and we were unable to recover it. 00:32:19.977 [2024-11-26 07:42:03.972886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.977 [2024-11-26 07:42:03.972897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.977 qpair failed and we were unable to recover it. 00:32:19.977 [2024-11-26 07:42:03.973285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.973296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.973482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.973493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.973813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.973824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 
00:32:19.978 [2024-11-26 07:42:03.974020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.974032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.974233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.974244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.974567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.974578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.974876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.974888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.975189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.975200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 
00:32:19.978 [2024-11-26 07:42:03.975529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.975540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.975847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.975858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.976169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.976180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.976360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.976373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.976656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.976667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 
00:32:19.978 [2024-11-26 07:42:03.976866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.976878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.977196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.977207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.977510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.977520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.977849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.977860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.978069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.978081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 
00:32:19.978 [2024-11-26 07:42:03.978368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.978379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.978689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.978700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.979008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.979020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.979333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.979344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.979680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.979691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 
00:32:19.978 [2024-11-26 07:42:03.980023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.980035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.980314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.980325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.980526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.980538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.980819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.980830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.981139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.981150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 
00:32:19.978 [2024-11-26 07:42:03.981429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.981440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.981762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.981773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.982102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.982113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.982444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.982455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.982663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.982674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 
00:32:19.978 [2024-11-26 07:42:03.982987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.978 [2024-11-26 07:42:03.982999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.978 qpair failed and we were unable to recover it. 00:32:19.978 [2024-11-26 07:42:03.983397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.983408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.983711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.983722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.984086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.984097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.984399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.984410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 
00:32:19.979 [2024-11-26 07:42:03.984720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.984733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.984875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.984886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.985233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.985244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.985575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.985585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.985887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.985898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 
00:32:19.979 [2024-11-26 07:42:03.986219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.986230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.986531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.986543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.986871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.986884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.987189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.987199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.987506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.987517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 
00:32:19.979 [2024-11-26 07:42:03.987848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.987859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.988203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.988214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.988534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.988544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.988838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.988849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.989160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.989172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 
00:32:19.979 [2024-11-26 07:42:03.989508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.989519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.989830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.989841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.990197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.990209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.990511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.990523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.990860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.990877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 
00:32:19.979 [2024-11-26 07:42:03.991205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.991216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.991519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.991530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.991841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.991852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.992130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.992140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.992453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.992464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 
00:32:19.979 [2024-11-26 07:42:03.992797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.992808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.993146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.993158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.993493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.993506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.993842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.993854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 00:32:19.979 [2024-11-26 07:42:03.994185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.994197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it. 
00:32:19.979 [2024-11-26 07:42:03.994387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.979 [2024-11-26 07:42:03.994398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.979 qpair failed and we were unable to recover it.
[... the same three-message group — posix.c:1054:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." — repeats roughly 114 more times between 07:42:03.994 and 07:42:04.029; identical repeats elided ...]
00:32:19.982 [2024-11-26 07:42:04.030147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.982 [2024-11-26 07:42:04.030160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.982 qpair failed and we were unable to recover it. 00:32:19.982 [2024-11-26 07:42:04.030469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.982 [2024-11-26 07:42:04.030481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.982 qpair failed and we were unable to recover it. 00:32:19.982 [2024-11-26 07:42:04.030713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.982 [2024-11-26 07:42:04.030724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.982 qpair failed and we were unable to recover it. 00:32:19.982 [2024-11-26 07:42:04.031038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.982 [2024-11-26 07:42:04.031049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.031352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.031365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 
00:32:19.983 [2024-11-26 07:42:04.031675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.031686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.031998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.032009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.032332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.032343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.032530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.032542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.032853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.032876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 
00:32:19.983 [2024-11-26 07:42:04.033191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.033201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.033496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.033506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.033704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.033716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.034043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.034054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.034361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.034373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 
00:32:19.983 [2024-11-26 07:42:04.034580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.034591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.034770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.034781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.035116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.035126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.035467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.035477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.035807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.035818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 
00:32:19.983 [2024-11-26 07:42:04.036148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.036159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.036510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.036522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.036820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.036832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.037126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.037138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.037439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.037450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 
00:32:19.983 [2024-11-26 07:42:04.037782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.037793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.038101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.038113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.038450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.038461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.038659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.038672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.038993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.039004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 
00:32:19.983 [2024-11-26 07:42:04.039323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.039334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.039661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.039672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.040001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.040012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.040382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.040392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.040724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.040735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 
00:32:19.983 [2024-11-26 07:42:04.041124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.041137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.041418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.983 [2024-11-26 07:42:04.041430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.983 qpair failed and we were unable to recover it. 00:32:19.983 [2024-11-26 07:42:04.041736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.041747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.042045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.042056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.042388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.042398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 
00:32:19.984 [2024-11-26 07:42:04.042710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.042722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.043032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.043044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.043349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.043360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.043666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.043677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.043994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.044006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 
00:32:19.984 [2024-11-26 07:42:04.044333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.044346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.044651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.044663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.044974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.044985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.045299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.045310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.045620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.045630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 
00:32:19.984 [2024-11-26 07:42:04.045908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.045919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.046222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.046233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.046537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.046548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.046850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.046866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.047204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.047216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 
00:32:19.984 [2024-11-26 07:42:04.047422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.047434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.047701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.047713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.048125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.048136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.048400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.048411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.048748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.048758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 
00:32:19.984 [2024-11-26 07:42:04.049064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.049075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.049420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.049432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.049767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.049777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.049964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.049975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.050313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.050324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 
00:32:19.984 [2024-11-26 07:42:04.050598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.050609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.050944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.050957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.051269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.051280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.051574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.051585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.051913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.051924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 
00:32:19.984 [2024-11-26 07:42:04.052234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.052245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.052553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.052564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.052740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.052753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.053079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.053091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 00:32:19.984 [2024-11-26 07:42:04.053392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.053404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.984 qpair failed and we were unable to recover it. 
00:32:19.984 [2024-11-26 07:42:04.053747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.984 [2024-11-26 07:42:04.053757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.053956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.053969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.054290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.054301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.054633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.054643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.054971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.054982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 
00:32:19.985 [2024-11-26 07:42:04.055304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.055316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.055610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.055621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.055928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.055940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.056276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.056287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.056596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.056607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 
00:32:19.985 [2024-11-26 07:42:04.056786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.056799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.057117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.057129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.057441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.057452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.057672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.057684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.057996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.058007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 
00:32:19.985 [2024-11-26 07:42:04.058307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.058318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.058651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.058662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.058873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.058884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.059055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.059067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.059361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.059372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 
00:32:19.985 [2024-11-26 07:42:04.059673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.059684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.059903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.059915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.060178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.060188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.060521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.060531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.060873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.060887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 
00:32:19.985 [2024-11-26 07:42:04.061074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.061085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.061361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.061371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.061697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.061708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.062013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.062025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.062356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.062367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 
00:32:19.985 [2024-11-26 07:42:04.062670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.062681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.062966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.062978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.063287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.063298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.063607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.063618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.063946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.063958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 
00:32:19.985 [2024-11-26 07:42:04.064289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.064301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.064634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.064647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.064852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.064867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.985 [2024-11-26 07:42:04.065191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.985 [2024-11-26 07:42:04.065202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.985 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.065480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.065491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 
00:32:19.986 [2024-11-26 07:42:04.065817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.065828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.066165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.066177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.066356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.066368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.066563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.066574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.066896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.066907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 
00:32:19.986 [2024-11-26 07:42:04.067282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.067293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.067601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.067612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.067933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.067945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.068156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.068166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.068423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.068434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 
00:32:19.986 [2024-11-26 07:42:04.068738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.068749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.069030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.069041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.069357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.069368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.069555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.069567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.069872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.069884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 
00:32:19.986 [2024-11-26 07:42:04.070206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.070216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.070408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.070420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.070686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.070697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.071026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.071038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.071372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.071384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 
00:32:19.986 [2024-11-26 07:42:04.071563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.071575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.071827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.071838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.072176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.072188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.072490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.072501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.072773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.072784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 
00:32:19.986 [2024-11-26 07:42:04.072970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.072981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.073204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.073215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.073508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.073519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.073821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.073832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.074142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.074153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 
00:32:19.986 [2024-11-26 07:42:04.074460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.074471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.074829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.074841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:19.986 [2024-11-26 07:42:04.075062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.986 [2024-11-26 07:42:04.075074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:19.986 qpair failed and we were unable to recover it. 00:32:20.262 [2024-11-26 07:42:04.075287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.262 [2024-11-26 07:42:04.075300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.262 qpair failed and we were unable to recover it. 00:32:20.262 [2024-11-26 07:42:04.075614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.262 [2024-11-26 07:42:04.075626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.262 qpair failed and we were unable to recover it. 
00:32:20.262 [2024-11-26 07:42:04.075961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.262 [2024-11-26 07:42:04.075972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.262 qpair failed and we were unable to recover it. 00:32:20.262 [2024-11-26 07:42:04.076166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.262 [2024-11-26 07:42:04.076178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.262 qpair failed and we were unable to recover it. 00:32:20.262 [2024-11-26 07:42:04.076482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.262 [2024-11-26 07:42:04.076493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.262 qpair failed and we were unable to recover it. 00:32:20.262 [2024-11-26 07:42:04.076800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.262 [2024-11-26 07:42:04.076811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.262 qpair failed and we were unable to recover it. 00:32:20.262 [2024-11-26 07:42:04.077153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.262 [2024-11-26 07:42:04.077164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.262 qpair failed and we were unable to recover it. 
00:32:20.262 [2024-11-26 07:42:04.077475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.262 [2024-11-26 07:42:04.077486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.262 qpair failed and we were unable to recover it. 00:32:20.262 [2024-11-26 07:42:04.077781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.262 [2024-11-26 07:42:04.077793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.262 qpair failed and we were unable to recover it. 00:32:20.262 [2024-11-26 07:42:04.077985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.262 [2024-11-26 07:42:04.077996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.262 qpair failed and we were unable to recover it. 00:32:20.262 [2024-11-26 07:42:04.078309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.262 [2024-11-26 07:42:04.078321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.262 qpair failed and we were unable to recover it. 00:32:20.262 [2024-11-26 07:42:04.078648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.263 [2024-11-26 07:42:04.078659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.263 qpair failed and we were unable to recover it. 
00:32:20.263 [2024-11-26 07:42:04.078956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.263 [2024-11-26 07:42:04.078967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.263 qpair failed and we were unable to recover it. 00:32:20.263 [2024-11-26 07:42:04.079274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.263 [2024-11-26 07:42:04.079285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.263 qpair failed and we were unable to recover it. 00:32:20.263 [2024-11-26 07:42:04.079623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.263 [2024-11-26 07:42:04.079635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.263 qpair failed and we were unable to recover it. 00:32:20.263 [2024-11-26 07:42:04.079944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.263 [2024-11-26 07:42:04.079955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.263 qpair failed and we were unable to recover it. 00:32:20.263 [2024-11-26 07:42:04.080265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.263 [2024-11-26 07:42:04.080276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.263 qpair failed and we were unable to recover it. 
00:32:20.263 [2024-11-26 07:42:04.080481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.263 [2024-11-26 07:42:04.080492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.263 qpair failed and we were unable to recover it. 00:32:20.263 [2024-11-26 07:42:04.080779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.263 [2024-11-26 07:42:04.080791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.263 qpair failed and we were unable to recover it. 00:32:20.263 [2024-11-26 07:42:04.081106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.263 [2024-11-26 07:42:04.081119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.263 qpair failed and we were unable to recover it. 00:32:20.263 [2024-11-26 07:42:04.081424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.263 [2024-11-26 07:42:04.081435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.263 qpair failed and we were unable to recover it. 00:32:20.263 [2024-11-26 07:42:04.081764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.263 [2024-11-26 07:42:04.081776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.263 qpair failed and we were unable to recover it. 
00:32:20.263 [2024-11-26 07:42:04.082095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.263 [2024-11-26 07:42:04.082107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.263 qpair failed and we were unable to recover it.
(the three lines above repeat verbatim, with only the timestamps changing, for every failed reconnect attempt against tqpair=0x1476490 at 10.0.0.2:4420, from [2024-11-26 07:42:04.082435] through [2024-11-26 07:42:04.117797] at 00:32:20.266)
00:32:20.266 [2024-11-26 07:42:04.118131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.118142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.118449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.118460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.118758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.118769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.119091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.119102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.119399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.119411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 
00:32:20.266 [2024-11-26 07:42:04.119630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.119642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.119938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.119949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.120279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.120289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.120604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.120616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.120921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.120933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 
00:32:20.266 [2024-11-26 07:42:04.121241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.121252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.121436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.121448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.121647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.121657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.121964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.121975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.122310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.122321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 
00:32:20.266 [2024-11-26 07:42:04.122627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.122639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.122977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.122988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.123298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.123309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.123645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.123656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.123991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.124002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 
00:32:20.266 [2024-11-26 07:42:04.124302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.124313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.124630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.124642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.124854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.124869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.125195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.125206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.125508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.125519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 
00:32:20.266 [2024-11-26 07:42:04.125830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.266 [2024-11-26 07:42:04.125840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.266 qpair failed and we were unable to recover it. 00:32:20.266 [2024-11-26 07:42:04.126162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.126173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.126492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.126504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.126763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.126774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.127084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.127095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 
00:32:20.267 [2024-11-26 07:42:04.127427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.127439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.127748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.127761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.128035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.128046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.128370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.128380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.128713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.128724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 
00:32:20.267 [2024-11-26 07:42:04.129066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.129078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.129388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.129398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.129699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.129711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.130025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.130037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.130351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.130362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 
00:32:20.267 [2024-11-26 07:42:04.130663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.130674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.130967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.130978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.131303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.131314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.131645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.131657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.131967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.131978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 
00:32:20.267 [2024-11-26 07:42:04.132292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.132303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.132581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.132592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.132921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.132932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.133240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.133251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.133507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.133518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 
00:32:20.267 [2024-11-26 07:42:04.133683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.133695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.134033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.134045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.134374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.134384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.134708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.134718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.135095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.135106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 
00:32:20.267 [2024-11-26 07:42:04.135317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.135328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.135634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.135645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.135997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.136008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.267 qpair failed and we were unable to recover it. 00:32:20.267 [2024-11-26 07:42:04.136337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.267 [2024-11-26 07:42:04.136348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 00:32:20.268 [2024-11-26 07:42:04.136654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.136665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 
00:32:20.268 [2024-11-26 07:42:04.136966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.136978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 00:32:20.268 [2024-11-26 07:42:04.137164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.137176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 00:32:20.268 [2024-11-26 07:42:04.137496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.137507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 00:32:20.268 [2024-11-26 07:42:04.137808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.137819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 00:32:20.268 [2024-11-26 07:42:04.138147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.138158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 
00:32:20.268 [2024-11-26 07:42:04.138476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.138487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 00:32:20.268 [2024-11-26 07:42:04.138846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.138857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 00:32:20.268 [2024-11-26 07:42:04.139187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.139198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 00:32:20.268 [2024-11-26 07:42:04.139505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.139517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 00:32:20.268 [2024-11-26 07:42:04.139868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.139880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 
00:32:20.268 [2024-11-26 07:42:04.140159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.140170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 00:32:20.268 [2024-11-26 07:42:04.140377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.140389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 00:32:20.268 [2024-11-26 07:42:04.140746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.140757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 00:32:20.268 [2024-11-26 07:42:04.140928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.140942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 00:32:20.268 [2024-11-26 07:42:04.141145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.141155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 
00:32:20.268 [2024-11-26 07:42:04.141363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.141373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 00:32:20.268 [2024-11-26 07:42:04.141638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.141649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 00:32:20.268 [2024-11-26 07:42:04.141972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.141983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 00:32:20.268 [2024-11-26 07:42:04.142184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.142196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 00:32:20.268 [2024-11-26 07:42:04.142531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.268 [2024-11-26 07:42:04.142542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.268 qpair failed and we were unable to recover it. 
00:32:20.271 [2024-11-26 07:42:04.177200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.271 [2024-11-26 07:42:04.177211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.271 qpair failed and we were unable to recover it. 00:32:20.271 [2024-11-26 07:42:04.177491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.271 [2024-11-26 07:42:04.177501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.271 qpair failed and we were unable to recover it. 00:32:20.271 [2024-11-26 07:42:04.177888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.271 [2024-11-26 07:42:04.177899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.271 qpair failed and we were unable to recover it. 00:32:20.271 [2024-11-26 07:42:04.178315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.271 [2024-11-26 07:42:04.178326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.271 qpair failed and we were unable to recover it. 00:32:20.271 [2024-11-26 07:42:04.178667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.271 [2024-11-26 07:42:04.178679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.271 qpair failed and we were unable to recover it. 
00:32:20.271 [2024-11-26 07:42:04.178997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.271 [2024-11-26 07:42:04.179009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.271 qpair failed and we were unable to recover it. 00:32:20.271 [2024-11-26 07:42:04.179362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.271 [2024-11-26 07:42:04.179373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.271 qpair failed and we were unable to recover it. 00:32:20.271 [2024-11-26 07:42:04.179682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.271 [2024-11-26 07:42:04.179693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.271 qpair failed and we were unable to recover it. 00:32:20.271 [2024-11-26 07:42:04.179899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.271 [2024-11-26 07:42:04.179911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.271 qpair failed and we were unable to recover it. 00:32:20.271 [2024-11-26 07:42:04.180240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.271 [2024-11-26 07:42:04.180251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.271 qpair failed and we were unable to recover it. 
00:32:20.271 [2024-11-26 07:42:04.180558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.271 [2024-11-26 07:42:04.180569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.271 qpair failed and we were unable to recover it. 00:32:20.271 [2024-11-26 07:42:04.180884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.271 [2024-11-26 07:42:04.180895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.271 qpair failed and we were unable to recover it. 00:32:20.271 [2024-11-26 07:42:04.181216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.271 [2024-11-26 07:42:04.181227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.271 qpair failed and we were unable to recover it. 00:32:20.271 [2024-11-26 07:42:04.181423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.271 [2024-11-26 07:42:04.181434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.271 qpair failed and we were unable to recover it. 00:32:20.271 [2024-11-26 07:42:04.181545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.271 [2024-11-26 07:42:04.181554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.271 qpair failed and we were unable to recover it. 
00:32:20.271 [2024-11-26 07:42:04.181737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.271 [2024-11-26 07:42:04.181748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.271 qpair failed and we were unable to recover it. 00:32:20.271 [2024-11-26 07:42:04.182058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.271 [2024-11-26 07:42:04.182069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.271 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.182402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.182412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.182743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.182754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.182904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.182915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 
00:32:20.272 [2024-11-26 07:42:04.183263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.183274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.183548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.183559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.183903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.183915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.184248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.184258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.184460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.184470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 
00:32:20.272 [2024-11-26 07:42:04.184663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.184675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.184948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.184958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.185268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.185279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.185585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.185596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.185883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.185895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 
00:32:20.272 [2024-11-26 07:42:04.186050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.186063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.186371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.186383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.186693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.186704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.187009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.187020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.187235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.187246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 
00:32:20.272 [2024-11-26 07:42:04.187557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.187568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.187877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.187889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.188101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.188111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.188443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.188454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.188759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.188770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 
00:32:20.272 [2024-11-26 07:42:04.189109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.189120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.189447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.189460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.189773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.189784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.189973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.189984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.190185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.190195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 
00:32:20.272 [2024-11-26 07:42:04.190525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.190536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.190748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.190759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.191069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.191080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.191389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.191400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.191681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.191693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 
00:32:20.272 [2024-11-26 07:42:04.192008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.192020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.192337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.192348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.192665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.192676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.193000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.193011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 00:32:20.272 [2024-11-26 07:42:04.193371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.272 [2024-11-26 07:42:04.193382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.272 qpair failed and we were unable to recover it. 
00:32:20.273 [2024-11-26 07:42:04.193680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.193691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.194004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.194015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.194215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.194225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.194557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.194573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.194935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.194946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 
00:32:20.273 [2024-11-26 07:42:04.195155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.195166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.195489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.195500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.195666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.195677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.195986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.195997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.196207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.196217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 
00:32:20.273 [2024-11-26 07:42:04.196527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.196539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.196848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.196859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.197200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.197212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.197540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.197551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.197889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.197900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 
00:32:20.273 [2024-11-26 07:42:04.198244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.198254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.198562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.198573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.198888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.198899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.199217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.199227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.199520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.199531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 
00:32:20.273 [2024-11-26 07:42:04.199710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.199721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.200010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.200022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.200362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.200373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.200659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.200670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 00:32:20.273 [2024-11-26 07:42:04.200967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.273 [2024-11-26 07:42:04.200978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.273 qpair failed and we were unable to recover it. 
00:32:20.277 [2024-11-26 07:42:04.235075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.235087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.235474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.235485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.235785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.235796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.236107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.236118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.236457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.236468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 
00:32:20.277 [2024-11-26 07:42:04.236778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.236789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.237097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.237108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.237409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.237421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.237753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.237764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.238090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.238102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 
00:32:20.277 [2024-11-26 07:42:04.238413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.238423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.238736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.238747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.239030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.239041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.239336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.239347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.239659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.239670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 
00:32:20.277 [2024-11-26 07:42:04.239980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.239992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.240326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.240337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.240566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.240578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.240836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.240847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.241209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.241221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 
00:32:20.277 [2024-11-26 07:42:04.241526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.241538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.241870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.241881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.242194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.242205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.242507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.242518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.242845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.242856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 
00:32:20.277 [2024-11-26 07:42:04.243166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.277 [2024-11-26 07:42:04.243177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.277 qpair failed and we were unable to recover it. 00:32:20.277 [2024-11-26 07:42:04.243478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.243490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.243791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.243802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.244097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.244109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.244448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.244459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 
00:32:20.278 [2024-11-26 07:42:04.244802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.244813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.245124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.245138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.245440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.245451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.245759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.245770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.246083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.246095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 
00:32:20.278 [2024-11-26 07:42:04.246404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.246415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.246710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.246722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.247064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.247076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.247412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.247423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.247613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.247623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 
00:32:20.278 [2024-11-26 07:42:04.247941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.247952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.248054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.248064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.248280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.248291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.248593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.248604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.248930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.248941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 
00:32:20.278 [2024-11-26 07:42:04.249247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.249257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.249581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.249593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.249912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.249923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.250249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.250260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.250553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.250564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 
00:32:20.278 [2024-11-26 07:42:04.250870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.250881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.251213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.251223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.278 [2024-11-26 07:42:04.251403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.278 [2024-11-26 07:42:04.251414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.278 qpair failed and we were unable to recover it. 00:32:20.279 [2024-11-26 07:42:04.251707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.251717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 00:32:20.279 [2024-11-26 07:42:04.252034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.252046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 
00:32:20.279 [2024-11-26 07:42:04.252397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.252408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 00:32:20.279 [2024-11-26 07:42:04.252581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.252594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 00:32:20.279 [2024-11-26 07:42:04.252869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.252881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 00:32:20.279 [2024-11-26 07:42:04.253001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.253014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 00:32:20.279 [2024-11-26 07:42:04.253194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.253205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 
00:32:20.279 [2024-11-26 07:42:04.253507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.253517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 00:32:20.279 [2024-11-26 07:42:04.253815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.253825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 00:32:20.279 [2024-11-26 07:42:04.254148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.254159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 00:32:20.279 [2024-11-26 07:42:04.254457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.254468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 00:32:20.279 [2024-11-26 07:42:04.254777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.254789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 
00:32:20.279 [2024-11-26 07:42:04.254985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.254996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 00:32:20.279 [2024-11-26 07:42:04.255331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.255343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 00:32:20.279 [2024-11-26 07:42:04.255544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.255555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 00:32:20.279 [2024-11-26 07:42:04.255820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.255831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 00:32:20.279 [2024-11-26 07:42:04.256163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.256174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 
00:32:20.279 [2024-11-26 07:42:04.256487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.256497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 00:32:20.279 [2024-11-26 07:42:04.256842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.256853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 00:32:20.279 [2024-11-26 07:42:04.257057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.257068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 00:32:20.279 [2024-11-26 07:42:04.257394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.257405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 00:32:20.279 [2024-11-26 07:42:04.257717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.279 [2024-11-26 07:42:04.257729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.279 qpair failed and we were unable to recover it. 
00:32:20.279 [2024-11-26 07:42:04.258039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.279 [2024-11-26 07:42:04.258051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.279 qpair failed and we were unable to recover it.
00:32:20.279 [2024-11-26 07:42:04.258242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.279 [2024-11-26 07:42:04.258253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.279 qpair failed and we were unable to recover it.
00:32:20.279 [2024-11-26 07:42:04.258599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.279 [2024-11-26 07:42:04.258610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.279 qpair failed and we were unable to recover it.
00:32:20.279 [2024-11-26 07:42:04.258805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.279 [2024-11-26 07:42:04.258816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.279 qpair failed and we were unable to recover it.
00:32:20.279 [2024-11-26 07:42:04.259121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.259132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.259311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.259324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.259621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.259633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.259928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.259939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.260271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.260282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.260476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.260487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.260818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.260830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.261138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.261149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.261456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.261467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.261783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.261795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.262111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.262122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.262460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.262471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.262696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.262707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.263047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.263058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.263243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.263254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.263574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.263584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.263877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.263888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.264227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.264238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.264525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.264536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.264885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.264896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.265236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.265247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.265578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.265589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.265757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.265770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.266059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.266070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.266383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.266394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.266701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.266712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.280 [2024-11-26 07:42:04.267000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.280 [2024-11-26 07:42:04.267011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.280 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.267355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.267366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.267676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.267687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.267998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.268009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.268348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.268359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.268655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.268667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.269051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.269062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.269354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.269365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.269675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.269686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.269997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.270008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.270243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.270254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.270567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.270578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.270946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.270957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.271212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.271223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.271568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.271579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.271911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.271923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.272230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.272240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.272543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.272554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.272867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.272879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.273210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.273222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.273555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.273566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.273941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.273952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.274212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.274223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.274540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.274551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.274865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.274877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.275206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.275217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.275410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.275422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.281 qpair failed and we were unable to recover it.
00:32:20.281 [2024-11-26 07:42:04.275746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.281 [2024-11-26 07:42:04.275757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.276102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.276113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.276448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.276458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.276767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.276778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.277082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.277092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.277401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.277413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.277758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.277769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.278096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.278107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.278412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.278423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.278758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.278770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.279099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.279110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.279320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.279331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.279644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.279655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.279992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.280004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.280315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.280326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.280643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.280654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.280978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.280989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.281313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.281324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.281577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.281588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.281878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.281890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.282056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.282068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.282401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.282414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.282746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.282757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.283070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.283080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.283429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.283439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.283725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.283736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.283923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.282 [2024-11-26 07:42:04.283935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.282 qpair failed and we were unable to recover it.
00:32:20.282 [2024-11-26 07:42:04.284133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.284144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.284457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.284469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.284777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.284788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.285005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.285017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.285212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.285223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.285541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.285552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.285752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.285763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.286041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.286052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.286242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.286253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.286471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.286482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.286790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.286801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.287123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.287135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.287422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.287433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.287747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.287757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.288079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.288090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.288398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.288410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.288753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.288764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.288978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.288989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.289319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.289330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.289643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.289654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.289826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.289837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.290208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.283 [2024-11-26 07:42:04.290222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.283 qpair failed and we were unable to recover it.
00:32:20.283 [2024-11-26 07:42:04.290510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.284 [2024-11-26 07:42:04.290521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.284 qpair failed and we were unable to recover it.
00:32:20.284 [2024-11-26 07:42:04.290823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.284 [2024-11-26 07:42:04.290833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.284 qpair failed and we were unable to recover it.
00:32:20.284 [2024-11-26 07:42:04.291108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.284 [2024-11-26 07:42:04.291120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.284 qpair failed and we were unable to recover it.
00:32:20.284 [2024-11-26 07:42:04.291428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.284 [2024-11-26 07:42:04.291439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.284 qpair failed and we were unable to recover it.
00:32:20.284 [2024-11-26 07:42:04.291650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.284 [2024-11-26 07:42:04.291661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.284 qpair failed and we were unable to recover it.
00:32:20.284 [2024-11-26 07:42:04.291974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.284 [2024-11-26 07:42:04.291985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.284 qpair failed and we were unable to recover it.
00:32:20.284 [2024-11-26 07:42:04.292304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.284 [2024-11-26 07:42:04.292315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.284 qpair failed and we were unable to recover it.
00:32:20.284 [2024-11-26 07:42:04.292605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.284 [2024-11-26 07:42:04.292616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.284 qpair failed and we were unable to recover it.
00:32:20.284 [2024-11-26 07:42:04.292938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.284 [2024-11-26 07:42:04.292950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.284 qpair failed and we were unable to recover it.
00:32:20.284 [2024-11-26 07:42:04.293284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.284 [2024-11-26 07:42:04.293294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.284 qpair failed and we were unable to recover it. 00:32:20.284 [2024-11-26 07:42:04.293646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.284 [2024-11-26 07:42:04.293658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.284 qpair failed and we were unable to recover it. 00:32:20.284 [2024-11-26 07:42:04.293968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.284 [2024-11-26 07:42:04.293979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.284 qpair failed and we were unable to recover it. 00:32:20.284 [2024-11-26 07:42:04.294270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.284 [2024-11-26 07:42:04.294282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.284 qpair failed and we were unable to recover it. 00:32:20.284 [2024-11-26 07:42:04.294499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.284 [2024-11-26 07:42:04.294511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.284 qpair failed and we were unable to recover it. 
00:32:20.284 [2024-11-26 07:42:04.294854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.284 [2024-11-26 07:42:04.294868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.284 qpair failed and we were unable to recover it. 00:32:20.284 [2024-11-26 07:42:04.295156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.284 [2024-11-26 07:42:04.295166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.284 qpair failed and we were unable to recover it. 00:32:20.284 [2024-11-26 07:42:04.295500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.284 [2024-11-26 07:42:04.295511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.284 qpair failed and we were unable to recover it. 00:32:20.284 [2024-11-26 07:42:04.295806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.284 [2024-11-26 07:42:04.295817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.284 qpair failed and we were unable to recover it. 00:32:20.284 [2024-11-26 07:42:04.296147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.284 [2024-11-26 07:42:04.296159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.284 qpair failed and we were unable to recover it. 
00:32:20.284 [2024-11-26 07:42:04.296463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.284 [2024-11-26 07:42:04.296474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.284 qpair failed and we were unable to recover it. 00:32:20.284 [2024-11-26 07:42:04.296754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.284 [2024-11-26 07:42:04.296765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.284 qpair failed and we were unable to recover it. 00:32:20.284 [2024-11-26 07:42:04.297086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.284 [2024-11-26 07:42:04.297097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.284 qpair failed and we were unable to recover it. 00:32:20.284 [2024-11-26 07:42:04.297397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.284 [2024-11-26 07:42:04.297409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.284 qpair failed and we were unable to recover it. 00:32:20.284 [2024-11-26 07:42:04.297717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.284 [2024-11-26 07:42:04.297728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.284 qpair failed and we were unable to recover it. 
00:32:20.284 [2024-11-26 07:42:04.298053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.284 [2024-11-26 07:42:04.298064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.284 qpair failed and we were unable to recover it. 00:32:20.284 [2024-11-26 07:42:04.298426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.284 [2024-11-26 07:42:04.298437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.284 qpair failed and we were unable to recover it. 00:32:20.284 [2024-11-26 07:42:04.298632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.284 [2024-11-26 07:42:04.298643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.284 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.298955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.298966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.299182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.299192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 
00:32:20.285 [2024-11-26 07:42:04.299513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.299524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.299865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.299877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.300168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.300180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.300518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.300529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.300914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.300926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 
00:32:20.285 [2024-11-26 07:42:04.301254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.301265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.301581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.301592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.301928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.301939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.302252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.302264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.302566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.302577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 
00:32:20.285 [2024-11-26 07:42:04.302889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.302900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.303240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.303250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.303432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.303443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.303645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.303656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.303965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.303976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 
00:32:20.285 [2024-11-26 07:42:04.304303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.304314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.304626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.304637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.304964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.304975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.305284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.305294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.305572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.305583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 
00:32:20.285 [2024-11-26 07:42:04.305898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.305909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.306242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.306253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.306592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.306604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.306952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.285 [2024-11-26 07:42:04.306964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.285 qpair failed and we were unable to recover it. 00:32:20.285 [2024-11-26 07:42:04.307276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.307286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 
00:32:20.286 [2024-11-26 07:42:04.307591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.307601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.307799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.307810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.308142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.308153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.308455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.308465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.308749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.308760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 
00:32:20.286 [2024-11-26 07:42:04.308969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.308981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.309342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.309354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.309701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.309712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.309994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.310006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.310317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.310327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 
00:32:20.286 [2024-11-26 07:42:04.310612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.310623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.310932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.310944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.311116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.311126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.311281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.311294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.311686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.311699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 
00:32:20.286 [2024-11-26 07:42:04.312003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.312014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.312366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.312378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.312681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.312691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.313023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.313034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.313374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.313386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 
00:32:20.286 [2024-11-26 07:42:04.313694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.313705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.314020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.314031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.314358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.314368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.314680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.314691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.314997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.315008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 
00:32:20.286 [2024-11-26 07:42:04.315334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.286 [2024-11-26 07:42:04.315345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.286 qpair failed and we were unable to recover it. 00:32:20.286 [2024-11-26 07:42:04.315528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.315539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.315734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.315745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.316089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.316101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.316398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.316409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 
00:32:20.287 [2024-11-26 07:42:04.316718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.316730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.317035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.317046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.317244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.317255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.317576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.317586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.317872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.317884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 
00:32:20.287 [2024-11-26 07:42:04.318266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.318276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.318591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.318601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.318810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.318822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.319037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.319049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.319435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.319445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 
00:32:20.287 [2024-11-26 07:42:04.319775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.319790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.319934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.319945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.320312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.320323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.320629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.320639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.320932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.320944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 
00:32:20.287 [2024-11-26 07:42:04.321241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.321252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.321571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.321583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.321925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.321937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.322284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.322296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.322513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.322524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 
00:32:20.287 [2024-11-26 07:42:04.322825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.322836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.323163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.323174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.287 [2024-11-26 07:42:04.323351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.287 [2024-11-26 07:42:04.323363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.287 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.323678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.323689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.323985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.323997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 
00:32:20.288 [2024-11-26 07:42:04.324354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.324365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.324687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.324698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.325010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.325021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.325359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.325370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.325686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.325696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 
00:32:20.288 [2024-11-26 07:42:04.326009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.326021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.326347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.326357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.326656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.326667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.326760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.326772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.327011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.327023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 
00:32:20.288 [2024-11-26 07:42:04.327338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.327349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.327674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.327684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.328063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.328076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.328428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.328439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.328741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.328752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 
00:32:20.288 [2024-11-26 07:42:04.329000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.329011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.329263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.329273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.329586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.329597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.329802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.329814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.330106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.330117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 
00:32:20.288 [2024-11-26 07:42:04.330319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.330330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.330635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.330646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.330928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.330938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.331259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.288 [2024-11-26 07:42:04.331269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.288 qpair failed and we were unable to recover it. 00:32:20.288 [2024-11-26 07:42:04.331567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.331579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 
00:32:20.289 [2024-11-26 07:42:04.331869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.331881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 00:32:20.289 [2024-11-26 07:42:04.332199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.332210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 00:32:20.289 [2024-11-26 07:42:04.332549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.332560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 00:32:20.289 [2024-11-26 07:42:04.332870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.332881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 00:32:20.289 [2024-11-26 07:42:04.333117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.333128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 
00:32:20.289 [2024-11-26 07:42:04.333440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.333450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 00:32:20.289 [2024-11-26 07:42:04.333760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.333771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 00:32:20.289 [2024-11-26 07:42:04.334091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.334102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 00:32:20.289 [2024-11-26 07:42:04.334292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.334302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 00:32:20.289 [2024-11-26 07:42:04.334678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.334689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 
00:32:20.289 [2024-11-26 07:42:04.334955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.334967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 00:32:20.289 [2024-11-26 07:42:04.335207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.335218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 00:32:20.289 [2024-11-26 07:42:04.335524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.335535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 00:32:20.289 [2024-11-26 07:42:04.335730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.335743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 00:32:20.289 [2024-11-26 07:42:04.336061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.336072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 
00:32:20.289 [2024-11-26 07:42:04.336396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.336407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 00:32:20.289 [2024-11-26 07:42:04.336708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.336718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 00:32:20.289 [2024-11-26 07:42:04.337049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.337060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 00:32:20.289 [2024-11-26 07:42:04.337283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.337295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 00:32:20.289 [2024-11-26 07:42:04.337609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.337620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 
00:32:20.289 [2024-11-26 07:42:04.337794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.337805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 00:32:20.289 [2024-11-26 07:42:04.338108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.338119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 00:32:20.289 [2024-11-26 07:42:04.338423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.289 [2024-11-26 07:42:04.338435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.289 qpair failed and we were unable to recover it. 00:32:20.289 [2024-11-26 07:42:04.338742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.338753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 00:32:20.290 [2024-11-26 07:42:04.338948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.338960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 
00:32:20.290 [2024-11-26 07:42:04.339218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.339229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 00:32:20.290 [2024-11-26 07:42:04.339556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.339567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 00:32:20.290 [2024-11-26 07:42:04.339770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.339781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 00:32:20.290 [2024-11-26 07:42:04.340138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.340149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 00:32:20.290 [2024-11-26 07:42:04.340462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.340473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 
00:32:20.290 [2024-11-26 07:42:04.340688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.340698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 00:32:20.290 [2024-11-26 07:42:04.341016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.341028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 00:32:20.290 [2024-11-26 07:42:04.341252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.341263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 00:32:20.290 [2024-11-26 07:42:04.341462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.341472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 00:32:20.290 [2024-11-26 07:42:04.341798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.341808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 
00:32:20.290 [2024-11-26 07:42:04.342076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.342087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 00:32:20.290 [2024-11-26 07:42:04.342391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.342402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 00:32:20.290 [2024-11-26 07:42:04.342703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.342715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 00:32:20.290 [2024-11-26 07:42:04.343026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.343037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 00:32:20.290 [2024-11-26 07:42:04.343351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.343362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 
00:32:20.290 [2024-11-26 07:42:04.343668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.343679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 00:32:20.290 [2024-11-26 07:42:04.344007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.344019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 00:32:20.290 [2024-11-26 07:42:04.344354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.344366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 00:32:20.290 [2024-11-26 07:42:04.344665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.344676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 00:32:20.290 [2024-11-26 07:42:04.345011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.290 [2024-11-26 07:42:04.345022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.290 qpair failed and we were unable to recover it. 
00:32:20.290 [2024-11-26 07:42:04.345333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.290 [2024-11-26 07:42:04.345344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420
00:32:20.290 qpair failed and we were unable to recover it.
[The three-record sequence above (connect() failed with errno = 111, i.e. ECONNREFUSED; nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x1476490 at addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously in a reconnect loop from 07:42:04.345333 through 07:42:04.381320 (log timestamps 00:32:20.290–00:32:20.572). Repeated occurrences elided.]
00:32:20.572 [2024-11-26 07:42:04.381605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.572 [2024-11-26 07:42:04.381616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.572 qpair failed and we were unable to recover it. 00:32:20.572 [2024-11-26 07:42:04.381902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.572 [2024-11-26 07:42:04.381913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.572 qpair failed and we were unable to recover it. 00:32:20.572 [2024-11-26 07:42:04.382237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.572 [2024-11-26 07:42:04.382248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.572 qpair failed and we were unable to recover it. 00:32:20.572 [2024-11-26 07:42:04.382554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.572 [2024-11-26 07:42:04.382565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.572 qpair failed and we were unable to recover it. 00:32:20.572 [2024-11-26 07:42:04.382878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.572 [2024-11-26 07:42:04.382890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.572 qpair failed and we were unable to recover it. 
00:32:20.572 [2024-11-26 07:42:04.383183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.572 [2024-11-26 07:42:04.383193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.572 qpair failed and we were unable to recover it. 00:32:20.572 [2024-11-26 07:42:04.383503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.572 [2024-11-26 07:42:04.383515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.572 qpair failed and we were unable to recover it. 00:32:20.572 [2024-11-26 07:42:04.383828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.572 [2024-11-26 07:42:04.383839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.572 qpair failed and we were unable to recover it. 00:32:20.572 [2024-11-26 07:42:04.384193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.572 [2024-11-26 07:42:04.384204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.572 qpair failed and we were unable to recover it. 00:32:20.572 [2024-11-26 07:42:04.384514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.572 [2024-11-26 07:42:04.384526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.572 qpair failed and we were unable to recover it. 
00:32:20.572 [2024-11-26 07:42:04.384939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.572 [2024-11-26 07:42:04.384950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.572 qpair failed and we were unable to recover it. 00:32:20.572 [2024-11-26 07:42:04.385277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.572 [2024-11-26 07:42:04.385288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.572 qpair failed and we were unable to recover it. 00:32:20.572 [2024-11-26 07:42:04.385599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.572 [2024-11-26 07:42:04.385610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.572 qpair failed and we were unable to recover it. 00:32:20.572 [2024-11-26 07:42:04.385917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.572 [2024-11-26 07:42:04.385929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.572 qpair failed and we were unable to recover it. 00:32:20.572 [2024-11-26 07:42:04.386189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.572 [2024-11-26 07:42:04.386200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.572 qpair failed and we were unable to recover it. 
00:32:20.572 [2024-11-26 07:42:04.386514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.572 [2024-11-26 07:42:04.386525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.572 qpair failed and we were unable to recover it. 00:32:20.572 [2024-11-26 07:42:04.386876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.386888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.387218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.387230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.387541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.387552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.387867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.387878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 
00:32:20.573 [2024-11-26 07:42:04.388184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.388194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.388395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.388407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.388621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.388631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.388683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.388694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.388888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.388900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 
00:32:20.573 [2024-11-26 07:42:04.389269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.389280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.389470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.389481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.389798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.389808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.389928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.389939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.390241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.390252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 
00:32:20.573 [2024-11-26 07:42:04.390588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.390599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.390912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.390923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.391147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.391158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.391429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.391439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.391766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.391777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 
00:32:20.573 [2024-11-26 07:42:04.392055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.392067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.392377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.392387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.392694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.392705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.393058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.393070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.393396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.393408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 
00:32:20.573 [2024-11-26 07:42:04.393493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.393504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.393820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.393830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.394166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.394177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.394499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.394510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.394699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.394713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 
00:32:20.573 [2024-11-26 07:42:04.395002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.395013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.395217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.395228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.395501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.395511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.395827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.395837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.395990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.396002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 
00:32:20.573 [2024-11-26 07:42:04.396273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.396284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.396585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.396596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.396936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.396947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.397135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.573 [2024-11-26 07:42:04.397147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.573 qpair failed and we were unable to recover it. 00:32:20.573 [2024-11-26 07:42:04.397439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.397449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 
00:32:20.574 [2024-11-26 07:42:04.397737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.397747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 00:32:20.574 [2024-11-26 07:42:04.398076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.398088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 00:32:20.574 [2024-11-26 07:42:04.398164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.398173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 00:32:20.574 [2024-11-26 07:42:04.398404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.398416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 00:32:20.574 [2024-11-26 07:42:04.398758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.398770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 
00:32:20.574 [2024-11-26 07:42:04.399093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.399105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 00:32:20.574 [2024-11-26 07:42:04.399433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.399444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 00:32:20.574 [2024-11-26 07:42:04.399766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.399776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 00:32:20.574 [2024-11-26 07:42:04.400100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.400111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 00:32:20.574 [2024-11-26 07:42:04.400400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.400411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 
00:32:20.574 [2024-11-26 07:42:04.400604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.400615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 00:32:20.574 [2024-11-26 07:42:04.400799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.400811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 00:32:20.574 [2024-11-26 07:42:04.401109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.401121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 00:32:20.574 [2024-11-26 07:42:04.401435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.401446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 00:32:20.574 [2024-11-26 07:42:04.401782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.401793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 
00:32:20.574 [2024-11-26 07:42:04.402212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.402226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 00:32:20.574 [2024-11-26 07:42:04.402526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.402536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 00:32:20.574 [2024-11-26 07:42:04.402738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.402749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 00:32:20.574 [2024-11-26 07:42:04.403059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.403072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 00:32:20.574 [2024-11-26 07:42:04.403427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.403437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it. 
00:32:20.574 [2024-11-26 07:42:04.403783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.574 [2024-11-26 07:42:04.403793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476490 with addr=10.0.0.2, port=4420 00:32:20.574 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 / sock connection error against addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats near-identically from 07:42:04.404 through 07:42:04.438, first for tqpair=0x1476490 and then, from 07:42:04.410 onward, for tqpair=0x7f90f4000b90; repeats elided ...]
00:32:20.575 [2024-11-26 07:42:04.410254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473020 is same with the state(6) to be set
00:32:20.577 [2024-11-26 07:42:04.438264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.577 [2024-11-26 07:42:04.438271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.577 qpair failed and we were unable to recover it.
00:32:20.577 [2024-11-26 07:42:04.438610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.577 [2024-11-26 07:42:04.438617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.577 qpair failed and we were unable to recover it. 00:32:20.577 [2024-11-26 07:42:04.438799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.577 [2024-11-26 07:42:04.438806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.577 qpair failed and we were unable to recover it. 00:32:20.577 [2024-11-26 07:42:04.439208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.577 [2024-11-26 07:42:04.439216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.577 qpair failed and we were unable to recover it. 00:32:20.577 [2024-11-26 07:42:04.439523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.577 [2024-11-26 07:42:04.439529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.577 qpair failed and we were unable to recover it. 00:32:20.577 [2024-11-26 07:42:04.439693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.577 [2024-11-26 07:42:04.439701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.577 qpair failed and we were unable to recover it. 
00:32:20.577 [2024-11-26 07:42:04.439961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.577 [2024-11-26 07:42:04.439969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.577 qpair failed and we were unable to recover it. 00:32:20.577 [2024-11-26 07:42:04.440329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.577 [2024-11-26 07:42:04.440336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.577 qpair failed and we were unable to recover it. 00:32:20.577 [2024-11-26 07:42:04.440678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.577 [2024-11-26 07:42:04.440684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.577 qpair failed and we were unable to recover it. 00:32:20.577 [2024-11-26 07:42:04.440999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.441006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.441343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.441349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 
00:32:20.578 [2024-11-26 07:42:04.441685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.441691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.442003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.442010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.442337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.442344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.442673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.442680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.442960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.442967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 
00:32:20.578 [2024-11-26 07:42:04.443184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.443190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.443475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.443482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.443768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.443774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.444082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.444090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.444420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.444427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 
00:32:20.578 [2024-11-26 07:42:04.444640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.444647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.444962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.444970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.445364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.445372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.445659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.445666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.445826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.445835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 
00:32:20.578 [2024-11-26 07:42:04.446205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.446212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.446521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.446528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.446835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.446843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.447160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.447170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.447343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.447351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 
00:32:20.578 [2024-11-26 07:42:04.447665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.447672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.447946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.447953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.448374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.448380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.448671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.448677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.448992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.448999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 
00:32:20.578 [2024-11-26 07:42:04.449315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.449321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.449622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.449630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.449821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.449829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.450126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.450134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.450320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.450328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 
00:32:20.578 [2024-11-26 07:42:04.450643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.450651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.450968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.450976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.451337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.451344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.451541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.451548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.451878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.451887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 
00:32:20.578 [2024-11-26 07:42:04.451955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.578 [2024-11-26 07:42:04.451964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.578 qpair failed and we were unable to recover it. 00:32:20.578 [2024-11-26 07:42:04.452241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.452249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.452591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.452599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.452894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.452902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.453099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.453107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 
00:32:20.579 [2024-11-26 07:42:04.453407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.453415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.453721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.453728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.454041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.454048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.454382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.454388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.454576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.454582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 
00:32:20.579 [2024-11-26 07:42:04.454775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.454783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.455121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.455128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.455328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.455346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.455624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.455631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.455946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.455953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 
00:32:20.579 [2024-11-26 07:42:04.456273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.456280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.456609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.456616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.456934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.456941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.457266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.457273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.460257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.460285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 
00:32:20.579 [2024-11-26 07:42:04.460582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.460591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.460784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.460792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.460984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.460992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.461179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.461190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.461399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.461407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 
00:32:20.579 [2024-11-26 07:42:04.461818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.461826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.462015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.462024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.462377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.462384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.462704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.462712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.463017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.463024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 
00:32:20.579 [2024-11-26 07:42:04.463314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.463320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.463622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.463629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.463942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.463949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.464226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.464234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 00:32:20.579 [2024-11-26 07:42:04.464525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.579 [2024-11-26 07:42:04.464532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.579 qpair failed and we were unable to recover it. 
00:32:20.582 [... identical connect() failed, errno = 111 / qpair recovery failure entries repeated for every subsequent retry through 2024-11-26 07:42:04.496 ...]
00:32:20.583 [2024-11-26 07:42:04.496568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.496575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.496986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.496993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2316839 Killed "${NVMF_APP[@]}" "$@" 00:32:20.583 [2024-11-26 07:42:04.497290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.497298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.497364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.497372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.497584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.497592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 
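Editor's note: the `Killed "${NVMF_APP[@]}" "$@"` record from line 36 of target_disconnect.sh shows the nvmf target process (pid 2316839) being killed while the host side keeps retrying the queue pair. With no listener left on 10.0.0.2:4420, every subsequent `connect()` is refused, which is exactly the error repeated throughout this log. A minimal sketch of that failure mode (loopback and an ephemeral port here, not the test's actual address):

```python
import errno
import socket

# Grab an ephemeral port, then close the socket so nothing is listening on it.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

# With no listener, the kernel rejects the TCP handshake immediately --
# connect() fails with ECONNREFUSED (errno = 111 on Linux), the same
# failure posix_sock_create reports above.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    rc = s.connect_ex(("127.0.0.1", port))
print(rc == errno.ECONNREFUSED)
```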
00:32:20.583 07:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:32:20.583 [2024-11-26 07:42:04.497922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.497930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 07:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:20.583 [2024-11-26 07:42:04.498236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.498243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.498462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.498469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 07:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:20.583 07:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:20.583 [2024-11-26 07:42:04.498791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.498798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 
00:32:20.583 07:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:20.583 [2024-11-26 07:42:04.499145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.499153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.499445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.499452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.499776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.499782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.499834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.499841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.500198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.500205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 
00:32:20.583 [2024-11-26 07:42:04.500500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.500507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.500795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.500803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.501118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.501126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.501401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.501409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.501717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.501724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 
00:32:20.583 [2024-11-26 07:42:04.502021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.502028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.502360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.502367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.502658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.502665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.502882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.502892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.503114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.503121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 
00:32:20.583 [2024-11-26 07:42:04.503507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.503514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.503693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.503701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.504029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.504037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.504412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.504419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.504786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.504793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 
00:32:20.583 [2024-11-26 07:42:04.504977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.504983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.505304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.505312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.505608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.505616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.505809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.505817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.583 [2024-11-26 07:42:04.506186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.506194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 
00:32:20.583 [2024-11-26 07:42:04.506364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.583 [2024-11-26 07:42:04.506372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.583 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.506647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.506655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 07:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2317968 00:32:20.584 [2024-11-26 07:42:04.506973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.506982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 07:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2317968 00:32:20.584 [2024-11-26 07:42:04.507307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.507315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 
00:32:20.584 07:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:20.584 07:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2317968 ']' 00:32:20.584 [2024-11-26 07:42:04.507629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.507637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.507767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.507774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 07:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.584 07:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:20.584 [2024-11-26 07:42:04.508094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.508102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 07:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:20.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:20.584 [2024-11-26 07:42:04.508409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.508417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 07:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:20.584 [2024-11-26 07:42:04.508614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.508622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 07:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:20.584 [2024-11-26 07:42:04.508822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.508830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.509219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.509229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 
00:32:20.584 [2024-11-26 07:42:04.509543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.509551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.509856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.509868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.510258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.510266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.510622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.510629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.510811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.510819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 
00:32:20.584 [2024-11-26 07:42:04.511101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.511109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.511171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.511191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.511467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.511475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.511785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.511794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.512191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.512198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 
00:32:20.584 [2024-11-26 07:42:04.512487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.512494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.512825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.512832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.513202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.513210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.513519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.513526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.513858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.513868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 
00:32:20.584 [2024-11-26 07:42:04.514208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.514215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.514387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.514395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.514681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.514688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.515013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.515021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.515346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.515353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 
00:32:20.584 [2024-11-26 07:42:04.515686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.515694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.515911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.515919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.516244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.516250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.584 qpair failed and we were unable to recover it. 00:32:20.584 [2024-11-26 07:42:04.516543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.584 [2024-11-26 07:42:04.516551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.585 qpair failed and we were unable to recover it. 00:32:20.585 [2024-11-26 07:42:04.516887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.585 [2024-11-26 07:42:04.516895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.585 qpair failed and we were unable to recover it. 
00:32:20.585 [2024-11-26 07:42:04.517135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.585 [2024-11-26 07:42:04.517142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:20.585 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure records repeated through 2024-11-26 07:42:04.549889, all errno = 111 against tqpair=0x7f90f4000b90, addr=10.0.0.2, port=4420]
00:32:20.588 [2024-11-26 07:42:04.550211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.550218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.550546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.550552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.550741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.550748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.551021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.551029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.551349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.551356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 
00:32:20.588 [2024-11-26 07:42:04.551550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.551558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.551896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.551903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.552229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.552236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.552414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.552422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.552754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.552761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 
00:32:20.588 [2024-11-26 07:42:04.553127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.553135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.553505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.553512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.553809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.553816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.554120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.554128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.554428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.554435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 
00:32:20.588 [2024-11-26 07:42:04.554592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.554599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.555016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.555024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.555206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.555214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.555423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.555430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.555801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.555810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 
00:32:20.588 [2024-11-26 07:42:04.556107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.556115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.556453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.556461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.556690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.556699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.557003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.557011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.557337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.557344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 
00:32:20.588 [2024-11-26 07:42:04.557545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.557552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.557779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.557788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.558063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.558070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.558388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.588 [2024-11-26 07:42:04.558395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.588 qpair failed and we were unable to recover it. 00:32:20.588 [2024-11-26 07:42:04.558566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.558573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 
00:32:20.589 [2024-11-26 07:42:04.558791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.558799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.559101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.559109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.559435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.559443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.559624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.559631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.559973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.559981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 
00:32:20.589 [2024-11-26 07:42:04.560312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.560320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.560530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.560537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.560831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.560837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.561162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.561170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.561482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.561489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 
00:32:20.589 [2024-11-26 07:42:04.561786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.561793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.561991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.561998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.562287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.562293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.562665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.562672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.562972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.562979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 
00:32:20.589 [2024-11-26 07:42:04.563142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.563149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.563438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.563445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.563659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.563666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.563987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.563995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.564311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.564318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 
00:32:20.589 [2024-11-26 07:42:04.564611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.564618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.564934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.564942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.565093] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:32:20.589 [2024-11-26 07:42:04.565140] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:20.589 [2024-11-26 07:42:04.565148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.565155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.565576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.565583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 
00:32:20.589 [2024-11-26 07:42:04.565923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.565931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.566091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.566099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.566396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.566405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.566597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.566605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.566883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.566894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 
00:32:20.589 [2024-11-26 07:42:04.567189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.567196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.567531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.567538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.567693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.567701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.568035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.568043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.568397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.568404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 
00:32:20.589 [2024-11-26 07:42:04.568695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.568703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.589 [2024-11-26 07:42:04.569021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.589 [2024-11-26 07:42:04.569029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.589 qpair failed and we were unable to recover it. 00:32:20.590 [2024-11-26 07:42:04.569360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.590 [2024-11-26 07:42:04.569369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.590 qpair failed and we were unable to recover it. 00:32:20.590 [2024-11-26 07:42:04.569714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.590 [2024-11-26 07:42:04.569721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.590 qpair failed and we were unable to recover it. 00:32:20.590 [2024-11-26 07:42:04.569933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.590 [2024-11-26 07:42:04.569941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.590 qpair failed and we were unable to recover it. 
00:32:20.590 [2024-11-26 07:42:04.570241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.590 [2024-11-26 07:42:04.570249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.590 qpair failed and we were unable to recover it. 00:32:20.590 [2024-11-26 07:42:04.570562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.590 [2024-11-26 07:42:04.570570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.590 qpair failed and we were unable to recover it. 00:32:20.590 [2024-11-26 07:42:04.570885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.590 [2024-11-26 07:42:04.570894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.590 qpair failed and we were unable to recover it. 00:32:20.590 [2024-11-26 07:42:04.571257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.590 [2024-11-26 07:42:04.571265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.590 qpair failed and we were unable to recover it. 00:32:20.590 [2024-11-26 07:42:04.571651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.590 [2024-11-26 07:42:04.571659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.590 qpair failed and we were unable to recover it. 
00:32:20.590 [2024-11-26 07:42:04.571833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.590 [2024-11-26 07:42:04.571841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.590 qpair failed and we were unable to recover it. 
00:32:20.593 [... same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error and "qpair failed and we were unable to recover it." messages for tqpair=0x7f90f4000b90 (addr=10.0.0.2, port=4420) repeated through 2024-11-26 07:42:04.603899 ...]
00:32:20.593 [2024-11-26 07:42:04.604254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.604261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.593 [2024-11-26 07:42:04.604540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.604547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.593 [2024-11-26 07:42:04.604934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.604942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.593 [2024-11-26 07:42:04.605302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.605310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.593 [2024-11-26 07:42:04.605480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.605488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 
00:32:20.593 [2024-11-26 07:42:04.605893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.605901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.593 [2024-11-26 07:42:04.606219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.606226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.593 [2024-11-26 07:42:04.606474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.606481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.593 [2024-11-26 07:42:04.606688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.606695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.593 [2024-11-26 07:42:04.606981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.606989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 
00:32:20.593 [2024-11-26 07:42:04.607357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.607365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.593 [2024-11-26 07:42:04.607691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.607698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.593 [2024-11-26 07:42:04.607931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.607938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.593 [2024-11-26 07:42:04.608256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.608263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.593 [2024-11-26 07:42:04.608556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.608563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 
00:32:20.593 [2024-11-26 07:42:04.608850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.608857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.593 [2024-11-26 07:42:04.609179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.609187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.593 [2024-11-26 07:42:04.609382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.609394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.593 [2024-11-26 07:42:04.609742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.609749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.593 [2024-11-26 07:42:04.610082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.610089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 
00:32:20.593 [2024-11-26 07:42:04.610384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.610391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.593 [2024-11-26 07:42:04.610799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.610806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.593 [2024-11-26 07:42:04.611157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.611164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.593 [2024-11-26 07:42:04.611340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.593 [2024-11-26 07:42:04.611348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.593 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.611669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.611677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 
00:32:20.594 [2024-11-26 07:42:04.611898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.611906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.612097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.612105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.612395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.612402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.612715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.612722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.613048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.613058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 
00:32:20.594 [2024-11-26 07:42:04.613312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.613319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.613680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.613687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.613981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.613989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.614320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.614327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.614648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.614656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 
00:32:20.594 [2024-11-26 07:42:04.614859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.614872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.615171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.615180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.615501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.615509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.615817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.615824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.616127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.616134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 
00:32:20.594 [2024-11-26 07:42:04.616454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.616461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.616623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.616630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.616918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.616925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.617245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.617252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.617628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.617635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 
00:32:20.594 [2024-11-26 07:42:04.617884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.617891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.618085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.618092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.618468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.618475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.618833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.618840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.619189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.619197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 
00:32:20.594 [2024-11-26 07:42:04.619413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.619421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.619720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.619728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.620039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.620047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.620220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.620227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.620593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.620601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 
00:32:20.594 [2024-11-26 07:42:04.620912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.620920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.621238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.621245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.621541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.621548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.621774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.621781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.594 [2024-11-26 07:42:04.622085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.622093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 
00:32:20.594 [2024-11-26 07:42:04.622276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.594 [2024-11-26 07:42:04.622283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.594 qpair failed and we were unable to recover it. 00:32:20.595 [2024-11-26 07:42:04.622633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.595 [2024-11-26 07:42:04.622640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.595 qpair failed and we were unable to recover it. 00:32:20.595 [2024-11-26 07:42:04.622868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.595 [2024-11-26 07:42:04.622876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.595 qpair failed and we were unable to recover it. 00:32:20.595 [2024-11-26 07:42:04.623144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.595 [2024-11-26 07:42:04.623151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.595 qpair failed and we were unable to recover it. 00:32:20.595 [2024-11-26 07:42:04.623487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.595 [2024-11-26 07:42:04.623494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.595 qpair failed and we were unable to recover it. 
00:32:20.595 [2024-11-26 07:42:04.623782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.595 [2024-11-26 07:42:04.623789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.595 qpair failed and we were unable to recover it. 00:32:20.595 [2024-11-26 07:42:04.624110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.595 [2024-11-26 07:42:04.624118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.595 qpair failed and we were unable to recover it. 00:32:20.595 [2024-11-26 07:42:04.624298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.595 [2024-11-26 07:42:04.624306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.595 qpair failed and we were unable to recover it. 00:32:20.595 [2024-11-26 07:42:04.624592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.595 [2024-11-26 07:42:04.624598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.595 qpair failed and we were unable to recover it. 00:32:20.595 [2024-11-26 07:42:04.624906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.595 [2024-11-26 07:42:04.624915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.595 qpair failed and we were unable to recover it. 
00:32:20.595 [2024-11-26 07:42:04.625110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.595 [2024-11-26 07:42:04.625118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.595 qpair failed and we were unable to recover it. 00:32:20.595 [2024-11-26 07:42:04.625394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.595 [2024-11-26 07:42:04.625401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.595 qpair failed and we were unable to recover it. 00:32:20.595 [2024-11-26 07:42:04.625600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.595 [2024-11-26 07:42:04.625614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.595 qpair failed and we were unable to recover it. 00:32:20.595 [2024-11-26 07:42:04.625933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.595 [2024-11-26 07:42:04.625940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.595 qpair failed and we were unable to recover it. 00:32:20.595 [2024-11-26 07:42:04.626240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.595 [2024-11-26 07:42:04.626246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.595 qpair failed and we were unable to recover it. 
00:32:20.595 [2024-11-26 07:42:04.626435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.595 [2024-11-26 07:42:04.626442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.595 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair (connect() failed, errno = 111; sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeats over a hundred more times with only the microsecond timestamps differing, from 07:42:04.626799 through 07:42:04.659296 (console time 00:32:20.595-00:32:20.598) ...]
00:32:20.598 [2024-11-26 07:42:04.659608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.598 [2024-11-26 07:42:04.659619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.598 qpair failed and we were unable to recover it.
00:32:20.598 [2024-11-26 07:42:04.659957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.598 [2024-11-26 07:42:04.659966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.598 qpair failed and we were unable to recover it. 00:32:20.598 [2024-11-26 07:42:04.660157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.598 [2024-11-26 07:42:04.660165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.598 qpair failed and we were unable to recover it. 00:32:20.598 [2024-11-26 07:42:04.660479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.598 [2024-11-26 07:42:04.660488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.598 qpair failed and we were unable to recover it. 00:32:20.598 [2024-11-26 07:42:04.660792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.598 [2024-11-26 07:42:04.660801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.598 qpair failed and we were unable to recover it. 00:32:20.598 [2024-11-26 07:42:04.661109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.598 [2024-11-26 07:42:04.661118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.598 qpair failed and we were unable to recover it. 
00:32:20.598 [2024-11-26 07:42:04.661426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.598 [2024-11-26 07:42:04.661433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.598 qpair failed and we were unable to recover it. 00:32:20.598 [2024-11-26 07:42:04.661596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.598 [2024-11-26 07:42:04.661605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.598 qpair failed and we were unable to recover it. 00:32:20.598 [2024-11-26 07:42:04.661885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.598 [2024-11-26 07:42:04.661893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.598 qpair failed and we were unable to recover it. 00:32:20.598 [2024-11-26 07:42:04.662073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.598 [2024-11-26 07:42:04.662083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.598 qpair failed and we were unable to recover it. 00:32:20.598 [2024-11-26 07:42:04.662391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.598 [2024-11-26 07:42:04.662399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.598 qpair failed and we were unable to recover it. 
00:32:20.598 [2024-11-26 07:42:04.662742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.598 [2024-11-26 07:42:04.662750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.598 qpair failed and we were unable to recover it. 00:32:20.598 [2024-11-26 07:42:04.662914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.598 [2024-11-26 07:42:04.662922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.598 qpair failed and we were unable to recover it. 00:32:20.598 [2024-11-26 07:42:04.663097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.598 [2024-11-26 07:42:04.663105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.598 qpair failed and we were unable to recover it. 00:32:20.598 [2024-11-26 07:42:04.663428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.598 [2024-11-26 07:42:04.663436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.598 qpair failed and we were unable to recover it. 00:32:20.598 [2024-11-26 07:42:04.663744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.598 [2024-11-26 07:42:04.663752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.598 qpair failed and we were unable to recover it. 
00:32:20.598 [2024-11-26 07:42:04.664050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.598 [2024-11-26 07:42:04.664059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.598 qpair failed and we were unable to recover it. 00:32:20.598 [2024-11-26 07:42:04.664370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.598 [2024-11-26 07:42:04.664377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.598 qpair failed and we were unable to recover it. 00:32:20.598 [2024-11-26 07:42:04.664680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.598 [2024-11-26 07:42:04.664688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.598 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.664872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.664882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.665197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.665206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 
00:32:20.599 [2024-11-26 07:42:04.665521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.665530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.665834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.665841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.666029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.666038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.666363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.666371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.666714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.666722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 
00:32:20.599 [2024-11-26 07:42:04.667032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.667040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.667364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.667373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.667701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.667709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.668026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.668034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.668363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.668371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 
00:32:20.599 [2024-11-26 07:42:04.668552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.668561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.668753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.668761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.669033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.669042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.669314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.669322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.669658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.669666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 
00:32:20.599 [2024-11-26 07:42:04.669979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.669987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.670209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.670217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.670397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.670406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.670731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.670739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.670898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.670908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 
00:32:20.599 [2024-11-26 07:42:04.671192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.671200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.671512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.671520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.671822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.671830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.672139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.672148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.672325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:20.599 [2024-11-26 07:42:04.672336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.672344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 
00:32:20.599 [2024-11-26 07:42:04.672673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.672681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.672985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.672993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.673175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.673184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.599 [2024-11-26 07:42:04.673522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.599 [2024-11-26 07:42:04.673530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.599 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.673837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.673845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 
00:32:20.600 [2024-11-26 07:42:04.674255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.674263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.674579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.674587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.674942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.674951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.675150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.675159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.675452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.675460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 
00:32:20.600 [2024-11-26 07:42:04.675812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.675821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.676111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.676119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.676412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.676420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.676698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.676706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.676962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.676970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 
00:32:20.600 [2024-11-26 07:42:04.677310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.677319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.677529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.677537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.677714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.677722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.678041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.678050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.678416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.678424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 
00:32:20.600 [2024-11-26 07:42:04.678629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.678637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.678807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.678815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.679129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.679138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.679297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.679305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.679487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.679496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 
00:32:20.600 [2024-11-26 07:42:04.679811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.679819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.680101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.680110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.680444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.680452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.680765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.680773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.681115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.681124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 
00:32:20.600 [2024-11-26 07:42:04.681460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.681469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.681781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.681789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.682159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.682167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.682507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.682516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.682833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.682844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 
00:32:20.600 [2024-11-26 07:42:04.683011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.683021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.683216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.683225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.683546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.683555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.683735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.683744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.684058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.684067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 
00:32:20.600 [2024-11-26 07:42:04.684381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.600 [2024-11-26 07:42:04.684389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.600 qpair failed and we were unable to recover it. 00:32:20.600 [2024-11-26 07:42:04.684709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.601 [2024-11-26 07:42:04.684717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.601 qpair failed and we were unable to recover it. 00:32:20.601 [2024-11-26 07:42:04.685044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.601 [2024-11-26 07:42:04.685052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.601 qpair failed and we were unable to recover it. 00:32:20.601 [2024-11-26 07:42:04.685229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.601 [2024-11-26 07:42:04.685237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.601 qpair failed and we were unable to recover it. 00:32:20.601 [2024-11-26 07:42:04.685581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.601 [2024-11-26 07:42:04.685589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.601 qpair failed and we were unable to recover it. 
00:32:20.601 [2024-11-26 07:42:04.685765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.601 [2024-11-26 07:42:04.685773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.601 qpair failed and we were unable to recover it. 00:32:20.601 [2024-11-26 07:42:04.685956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.601 [2024-11-26 07:42:04.685965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.601 qpair failed and we were unable to recover it. 00:32:20.879 [2024-11-26 07:42:04.686250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.879 [2024-11-26 07:42:04.686259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.879 qpair failed and we were unable to recover it. 00:32:20.879 [2024-11-26 07:42:04.686548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.879 [2024-11-26 07:42:04.686558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.879 qpair failed and we were unable to recover it. 00:32:20.879 [2024-11-26 07:42:04.686870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.879 [2024-11-26 07:42:04.686879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.879 qpair failed and we were unable to recover it. 
00:32:20.879 [2024-11-26 07:42:04.687101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.879 [2024-11-26 07:42:04.687109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.879 qpair failed and we were unable to recover it. 00:32:20.879 [2024-11-26 07:42:04.687416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.879 [2024-11-26 07:42:04.687424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.879 qpair failed and we were unable to recover it. 00:32:20.879 [2024-11-26 07:42:04.687731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.879 [2024-11-26 07:42:04.687739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.879 qpair failed and we were unable to recover it. 00:32:20.879 [2024-11-26 07:42:04.688045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.879 [2024-11-26 07:42:04.688054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.879 qpair failed and we were unable to recover it. 00:32:20.879 [2024-11-26 07:42:04.688240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.879 [2024-11-26 07:42:04.688249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.879 qpair failed and we were unable to recover it. 
00:32:20.879 [2024-11-26 07:42:04.688576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.879 [2024-11-26 07:42:04.688585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.879 qpair failed and we were unable to recover it. 00:32:20.879 [2024-11-26 07:42:04.688899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.879 [2024-11-26 07:42:04.688907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.879 qpair failed and we were unable to recover it. 00:32:20.879 [2024-11-26 07:42:04.689249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.879 [2024-11-26 07:42:04.689257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.879 qpair failed and we were unable to recover it. 00:32:20.879 [2024-11-26 07:42:04.689623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.879 [2024-11-26 07:42:04.689632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.879 qpair failed and we were unable to recover it. 00:32:20.879 [2024-11-26 07:42:04.689940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.879 [2024-11-26 07:42:04.689948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 
00:32:20.880 [2024-11-26 07:42:04.690274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.690282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.690590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.690598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.690908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.690916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.691247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.691255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.691440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.691449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 
00:32:20.880 [2024-11-26 07:42:04.691766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.691774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.692079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.692087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.692394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.692401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.692752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.692760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.693079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.693088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 
00:32:20.880 [2024-11-26 07:42:04.693405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.693413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.693737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.693745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.694076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.694085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.694416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.694425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.694748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.694759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 
00:32:20.880 [2024-11-26 07:42:04.695055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.695063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.695395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.695403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.695709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.695716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.695875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.695884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.696078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.696086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 
00:32:20.880 [2024-11-26 07:42:04.696260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.696267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.696544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.696552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.696917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.696925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.697250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.697259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.697436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.697446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 
00:32:20.880 [2024-11-26 07:42:04.697618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.697626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.697959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.697967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.698168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.698176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.698471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.698479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.698814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.698821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 
00:32:20.880 [2024-11-26 07:42:04.699150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.699158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.699344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.699353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.699705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.699713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.700015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.700024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.700343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.700351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 
00:32:20.880 [2024-11-26 07:42:04.700518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.700526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.700840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.880 [2024-11-26 07:42:04.700849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.880 qpair failed and we were unable to recover it. 00:32:20.880 [2024-11-26 07:42:04.701120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.881 [2024-11-26 07:42:04.701129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.881 qpair failed and we were unable to recover it. 00:32:20.881 [2024-11-26 07:42:04.701437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.881 [2024-11-26 07:42:04.701446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.881 qpair failed and we were unable to recover it. 00:32:20.881 [2024-11-26 07:42:04.701631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.881 [2024-11-26 07:42:04.701641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.881 qpair failed and we were unable to recover it. 
00:32:20.881 [2024-11-26 07:42:04.701826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.881 [2024-11-26 07:42:04.701835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.881 qpair failed and we were unable to recover it. 00:32:20.881 [2024-11-26 07:42:04.702162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.881 [2024-11-26 07:42:04.702172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.881 qpair failed and we were unable to recover it. 00:32:20.881 [2024-11-26 07:42:04.702463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.881 [2024-11-26 07:42:04.702472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.881 qpair failed and we were unable to recover it. 00:32:20.881 [2024-11-26 07:42:04.702793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.881 [2024-11-26 07:42:04.702802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.881 qpair failed and we were unable to recover it. 00:32:20.881 [2024-11-26 07:42:04.703186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.881 [2024-11-26 07:42:04.703195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.881 qpair failed and we were unable to recover it. 
00:32:20.881 [2024-11-26 07:42:04.703541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.881 [2024-11-26 07:42:04.703549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.881 qpair failed and we were unable to recover it. 00:32:20.881 [2024-11-26 07:42:04.703857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.881 [2024-11-26 07:42:04.703870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.881 qpair failed and we were unable to recover it. 00:32:20.881 [2024-11-26 07:42:04.704210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.881 [2024-11-26 07:42:04.704219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.881 qpair failed and we were unable to recover it. 00:32:20.881 [2024-11-26 07:42:04.704513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.881 [2024-11-26 07:42:04.704521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.881 qpair failed and we were unable to recover it. 00:32:20.881 [2024-11-26 07:42:04.704692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.881 [2024-11-26 07:42:04.704700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.881 qpair failed and we were unable to recover it. 
00:32:20.881 [2024-11-26 07:42:04.704986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.881 [2024-11-26 07:42:04.704994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.881 qpair failed and we were unable to recover it. 00:32:20.881 [2024-11-26 07:42:04.705299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.881 [2024-11-26 07:42:04.705308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.881 qpair failed and we were unable to recover it. 00:32:20.881 [2024-11-26 07:42:04.705639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.881 [2024-11-26 07:42:04.705649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.881 qpair failed and we were unable to recover it. 00:32:20.881 [2024-11-26 07:42:04.705931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.881 [2024-11-26 07:42:04.705951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.881 qpair failed and we were unable to recover it. 00:32:20.881 [2024-11-26 07:42:04.706287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.881 [2024-11-26 07:42:04.706296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.881 qpair failed and we were unable to recover it. 
00:32:20.881 [2024-11-26 07:42:04.706468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.881 [2024-11-26 07:42:04.706477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:20.881 qpair failed and we were unable to recover it.
00:32:20.881 [2024-11-26 07:42:04.706770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.881 [2024-11-26 07:42:04.706778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:20.881 qpair failed and we were unable to recover it.
00:32:20.881 [2024-11-26 07:42:04.707102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.881 [2024-11-26 07:42:04.707112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:20.881 qpair failed and we were unable to recover it.
00:32:20.881 [2024-11-26 07:42:04.707458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.881 [2024-11-26 07:42:04.707467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:20.881 qpair failed and we were unable to recover it.
00:32:20.881 [2024-11-26 07:42:04.707778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.881 [2024-11-26 07:42:04.707786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:20.881 qpair failed and we were unable to recover it.
00:32:20.881 [2024-11-26 07:42:04.708037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:20.881 [2024-11-26 07:42:04.708066] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:20.881 [2024-11-26 07:42:04.708074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:20.881 [2024-11-26 07:42:04.708080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:20.881 [2024-11-26 07:42:04.708086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:20.881 [2024-11-26 07:42:04.708114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.881 [2024-11-26 07:42:04.708122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:20.881 qpair failed and we were unable to recover it.
00:32:20.881 [2024-11-26 07:42:04.708434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.881 [2024-11-26 07:42:04.708442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:20.881 qpair failed and we were unable to recover it.
00:32:20.881 [2024-11-26 07:42:04.708516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.881 [2024-11-26 07:42:04.708524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:20.881 qpair failed and we were unable to recover it.
00:32:20.881 Read completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Read completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Read completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Write completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Read completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Write completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Write completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Write completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Write completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Read completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Write completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Read completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Read completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Write completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Read completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Write completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Write completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Read completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Read completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Read completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Read completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Read completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Write completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Read completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Read completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Write completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Write completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Write completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Write completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Write completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Write completed with error (sct=0, sc=8)
00:32:20.881 starting I/O failed
00:32:20.881 Read completed with error (sct=0, sc=8)
00:32:20.882 starting I/O failed
00:32:20.882 [2024-11-26 07:42:04.709274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:20.882 [2024-11-26 07:42:04.709597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:32:20.882 [2024-11-26 07:42:04.709710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.882 [2024-11-26 07:42:04.709771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f0000b90 with addr=10.0.0.2, port=4420
00:32:20.882 qpair failed and we were unable to recover it.
00:32:20.882 [2024-11-26 07:42:04.709737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:32:20.882 [2024-11-26 07:42:04.709848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.882 [2024-11-26 07:42:04.709858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:20.882 qpair failed and we were unable to recover it.
00:32:20.882 [2024-11-26 07:42:04.709916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:32:20.882 [2024-11-26 07:42:04.709918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:32:20.882 [2024-11-26 07:42:04.710191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.882 [2024-11-26 07:42:04.710200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:20.882 qpair failed and we were unable to recover it.
00:32:20.882 [2024-11-26 07:42:04.710389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.882 [2024-11-26 07:42:04.710397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:20.882 qpair failed and we were unable to recover it.
00:32:20.882 [2024-11-26 07:42:04.710616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.882 [2024-11-26 07:42:04.710624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:20.882 qpair failed and we were unable to recover it.
00:32:20.882 [2024-11-26 07:42:04.710949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.882 [2024-11-26 07:42:04.710957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:20.882 qpair failed and we were unable to recover it.
00:32:20.882 [2024-11-26 07:42:04.711321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.882 [2024-11-26 07:42:04.711329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:20.882 qpair failed and we were unable to recover it.
00:32:20.882 [2024-11-26 07:42:04.711674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.711682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.711992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.712000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.712326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.712334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.712655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.712664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.712855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.712871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 
00:32:20.882 [2024-11-26 07:42:04.713079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.713088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.713370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.713378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.713713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.713721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.714035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.714044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.714373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.714382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 
00:32:20.882 [2024-11-26 07:42:04.714682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.714690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.715033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.715042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.715381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.715389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.715692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.715700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.715801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.715808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 
00:32:20.882 [2024-11-26 07:42:04.716088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.716096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.716406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.716415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.716617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.716625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.716948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.716956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.717155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.717164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 
00:32:20.882 [2024-11-26 07:42:04.717478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.717486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.717801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.717810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.717993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.718002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.718348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.718356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.718554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.718562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 
00:32:20.882 [2024-11-26 07:42:04.718909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.718918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.719264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.882 [2024-11-26 07:42:04.719272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.882 qpair failed and we were unable to recover it. 00:32:20.882 [2024-11-26 07:42:04.719466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.719474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.719789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.719797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.720104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.720113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 
00:32:20.883 [2024-11-26 07:42:04.720426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.720435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.720600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.720609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.720962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.720971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.721161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.721170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.721497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.721505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 
00:32:20.883 [2024-11-26 07:42:04.721727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.721734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.722058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.722066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.722358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.722366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.722709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.722717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.722945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.722954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 
00:32:20.883 [2024-11-26 07:42:04.723237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.723245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.723577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.723585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.723898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.723907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.724247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.724256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.724589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.724598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 
00:32:20.883 [2024-11-26 07:42:04.724658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.724664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.725001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.725009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.725355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.725363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.725704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.725712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.726025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.726035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 
00:32:20.883 [2024-11-26 07:42:04.726343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.726352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.726524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.726534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.726800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.726808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.727123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.727132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.727296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.727308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 
00:32:20.883 [2024-11-26 07:42:04.727613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.727622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.727931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.727939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.728106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.728116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.728377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.728385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.728699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.728707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 
00:32:20.883 [2024-11-26 07:42:04.729018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.729028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.729375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.729384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.729558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.729567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.729648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.729656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 00:32:20.883 [2024-11-26 07:42:04.729965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.883 [2024-11-26 07:42:04.729974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.883 qpair failed and we were unable to recover it. 
00:32:20.883 [2024-11-26 07:42:04.730162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.730170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 00:32:20.884 [2024-11-26 07:42:04.730458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.730466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 00:32:20.884 [2024-11-26 07:42:04.730782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.730791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 00:32:20.884 [2024-11-26 07:42:04.731091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.731100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 00:32:20.884 [2024-11-26 07:42:04.731425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.731433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 
00:32:20.884 [2024-11-26 07:42:04.731622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.731631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 00:32:20.884 [2024-11-26 07:42:04.731808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.731816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 00:32:20.884 [2024-11-26 07:42:04.732103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.732113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 00:32:20.884 [2024-11-26 07:42:04.732444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.732453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 00:32:20.884 [2024-11-26 07:42:04.732628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.732636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 
00:32:20.884 [2024-11-26 07:42:04.732983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.732991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 00:32:20.884 [2024-11-26 07:42:04.733179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.733188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 00:32:20.884 [2024-11-26 07:42:04.733518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.733526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 00:32:20.884 [2024-11-26 07:42:04.733719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.733728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 00:32:20.884 [2024-11-26 07:42:04.733966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.733975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 
00:32:20.884 [2024-11-26 07:42:04.734171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.734179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 00:32:20.884 [2024-11-26 07:42:04.734350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.734359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 00:32:20.884 [2024-11-26 07:42:04.734400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.734406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 00:32:20.884 [2024-11-26 07:42:04.734719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.734728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 00:32:20.884 [2024-11-26 07:42:04.735035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.884 [2024-11-26 07:42:04.735045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.884 qpair failed and we were unable to recover it. 
00:32:20.887 [2024-11-26 07:42:04.764397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.887 [2024-11-26 07:42:04.764405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.887 qpair failed and we were unable to recover it. 00:32:20.887 [2024-11-26 07:42:04.764599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.887 [2024-11-26 07:42:04.764608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.887 qpair failed and we were unable to recover it. 00:32:20.887 [2024-11-26 07:42:04.764902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.887 [2024-11-26 07:42:04.764909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.887 qpair failed and we were unable to recover it. 00:32:20.887 [2024-11-26 07:42:04.765072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.887 [2024-11-26 07:42:04.765080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.887 qpair failed and we were unable to recover it. 00:32:20.887 [2024-11-26 07:42:04.765260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.887 [2024-11-26 07:42:04.765269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.887 qpair failed and we were unable to recover it. 
00:32:20.887 [2024-11-26 07:42:04.765439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.887 [2024-11-26 07:42:04.765448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.887 qpair failed and we were unable to recover it. 00:32:20.887 [2024-11-26 07:42:04.765768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.887 [2024-11-26 07:42:04.765777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.887 qpair failed and we were unable to recover it. 00:32:20.887 [2024-11-26 07:42:04.766103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.887 [2024-11-26 07:42:04.766111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.887 qpair failed and we were unable to recover it. 00:32:20.887 [2024-11-26 07:42:04.766428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.887 [2024-11-26 07:42:04.766436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.887 qpair failed and we were unable to recover it. 00:32:20.887 [2024-11-26 07:42:04.766760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.887 [2024-11-26 07:42:04.766768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.887 qpair failed and we were unable to recover it. 
00:32:20.887 [2024-11-26 07:42:04.767115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.887 [2024-11-26 07:42:04.767124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.887 qpair failed and we were unable to recover it. 00:32:20.887 [2024-11-26 07:42:04.767295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.887 [2024-11-26 07:42:04.767304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.887 qpair failed and we were unable to recover it. 00:32:20.887 [2024-11-26 07:42:04.767637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.887 [2024-11-26 07:42:04.767644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.887 qpair failed and we were unable to recover it. 00:32:20.887 [2024-11-26 07:42:04.768015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.887 [2024-11-26 07:42:04.768024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.887 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.768364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.768371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 
00:32:20.888 [2024-11-26 07:42:04.768684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.768693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.768881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.768890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.769138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.769146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.769362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.769370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.769552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.769562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 
00:32:20.888 [2024-11-26 07:42:04.769853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.769868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.770157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.770165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.770333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.770342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.770546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.770554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.770858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.770877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 
00:32:20.888 [2024-11-26 07:42:04.771182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.771190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.771460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.771468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.771773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.771781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.772066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.772075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.772388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.772396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 
00:32:20.888 [2024-11-26 07:42:04.772710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.772718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.772922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.772931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.773128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.773136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.773472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.773480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.773795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.773803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 
00:32:20.888 [2024-11-26 07:42:04.773843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.773849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.774159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.774167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.774513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.774521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.774840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.774847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.775183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.775191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 
00:32:20.888 [2024-11-26 07:42:04.775532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.775540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.775853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.775865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.776151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.776161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.776343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.776351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.776651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.776660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 
00:32:20.888 [2024-11-26 07:42:04.776701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.776709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.888 [2024-11-26 07:42:04.776867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.888 [2024-11-26 07:42:04.776876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.888 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.777217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.777225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.777403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.777413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.777638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.777646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 
00:32:20.889 [2024-11-26 07:42:04.777954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.777963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.778278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.778286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.778598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.778606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.778790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.778798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.779129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.779137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 
00:32:20.889 [2024-11-26 07:42:04.779295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.779304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.779463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.779471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.779791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.779799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.779974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.779983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.780322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.780332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 
00:32:20.889 [2024-11-26 07:42:04.780513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.780522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.780802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.780810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.781127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.781135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.781454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.781462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.781502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.781509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 
00:32:20.889 [2024-11-26 07:42:04.781787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.781795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.782142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.782151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.782485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.782493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.782529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.782535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.782727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.782735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 
00:32:20.889 [2024-11-26 07:42:04.783076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.783084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.783396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.783405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.783595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.783604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.783805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.783813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 00:32:20.889 [2024-11-26 07:42:04.784108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.889 [2024-11-26 07:42:04.784116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.889 qpair failed and we were unable to recover it. 
00:32:20.889 [2024-11-26 07:42:04.784411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.889 [2024-11-26 07:42:04.784418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:20.889 qpair failed and we were unable to recover it.
[... the same error triplet (posix_sock_create connect() failed, errno = 111 / ECONNREFUSED; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 07:42:04.784735 through 07:42:04.815100; repeats elided ...]
00:32:20.892 [2024-11-26 07:42:04.815406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.892 [2024-11-26 07:42:04.815415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.892 qpair failed and we were unable to recover it. 00:32:20.892 [2024-11-26 07:42:04.815574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.892 [2024-11-26 07:42:04.815582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.892 qpair failed and we were unable to recover it. 00:32:20.892 [2024-11-26 07:42:04.815752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.892 [2024-11-26 07:42:04.815760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.892 qpair failed and we were unable to recover it. 00:32:20.892 [2024-11-26 07:42:04.816048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.892 [2024-11-26 07:42:04.816057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.892 qpair failed and we were unable to recover it. 00:32:20.892 [2024-11-26 07:42:04.816366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.892 [2024-11-26 07:42:04.816375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.892 qpair failed and we were unable to recover it. 
00:32:20.892 [2024-11-26 07:42:04.816686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.892 [2024-11-26 07:42:04.816694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.892 qpair failed and we were unable to recover it. 00:32:20.892 [2024-11-26 07:42:04.816956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.892 [2024-11-26 07:42:04.816965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.892 qpair failed and we were unable to recover it. 00:32:20.892 [2024-11-26 07:42:04.817157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.892 [2024-11-26 07:42:04.817166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.892 qpair failed and we were unable to recover it. 00:32:20.892 [2024-11-26 07:42:04.817353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.817361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.817540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.817548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 
00:32:20.893 [2024-11-26 07:42:04.817859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.817873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.818029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.818036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.818419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.818427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.818610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.818619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.818934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.818943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 
00:32:20.893 [2024-11-26 07:42:04.819126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.819134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.819320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.819328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.819616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.819624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.819796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.819805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.820091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.820099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 
00:32:20.893 [2024-11-26 07:42:04.820191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.820198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.820428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.820436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.820735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.820743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.821031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.821039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.821216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.821224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 
00:32:20.893 [2024-11-26 07:42:04.821407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.821416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.821700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.821707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.822028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.822036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.822388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.822395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.822580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.822591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 
00:32:20.893 [2024-11-26 07:42:04.822913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.822921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.823102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.823110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.823409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.823417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.823726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.823734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.823902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.823910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 
00:32:20.893 [2024-11-26 07:42:04.824059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.824067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.824401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.824409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.824586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.824596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.824899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.824907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 00:32:20.893 [2024-11-26 07:42:04.825213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.825221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.893 qpair failed and we were unable to recover it. 
00:32:20.893 [2024-11-26 07:42:04.825396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.893 [2024-11-26 07:42:04.825405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.825569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.825577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.825783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.825792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.826096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.826105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.826419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.826427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 
00:32:20.894 [2024-11-26 07:42:04.826731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.826740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.827040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.827048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.827347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.827356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.827544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.827554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.827759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.827768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 
00:32:20.894 [2024-11-26 07:42:04.828072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.828080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.828265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.828274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.828471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.828479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.828519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.828525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.828790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.828798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 
00:32:20.894 [2024-11-26 07:42:04.829090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.829098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.829438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.829446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.829779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.829787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.829940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.829948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.830133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.830141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 
00:32:20.894 [2024-11-26 07:42:04.830375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.830384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.830716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.830724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.831060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.831067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.831243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.831252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.831599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.831607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 
00:32:20.894 [2024-11-26 07:42:04.831941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.831949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.832263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.832271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.832323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.832329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.832504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.832512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.832840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.832849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 
00:32:20.894 [2024-11-26 07:42:04.833019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.833028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.833268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.833276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.833617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.833625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.833803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.833811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 00:32:20.894 [2024-11-26 07:42:04.834085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.894 [2024-11-26 07:42:04.834093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.894 qpair failed and we were unable to recover it. 
00:32:20.894 [2024-11-26 07:42:04.834392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.894 [2024-11-26 07:42:04.834400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:20.894 qpair failed and we were unable to recover it.
00:32:20.897 [2024-11-26 07:42:04.863423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.897 [2024-11-26 07:42:04.863430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.897 qpair failed and we were unable to recover it. 00:32:20.897 [2024-11-26 07:42:04.863585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.863592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.864004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.864012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.864331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.864339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.864650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.864657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 
00:32:20.898 [2024-11-26 07:42:04.865051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.865059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.865363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.865370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.865543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.865552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.865867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.865875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.866197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.866205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 
00:32:20.898 [2024-11-26 07:42:04.866526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.866534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.866850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.866859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.867037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.867046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.867228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.867236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.867575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.867583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 
00:32:20.898 [2024-11-26 07:42:04.867875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.867883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.868040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.868049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.868209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.868217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.868519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.868527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.868710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.868719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 
00:32:20.898 [2024-11-26 07:42:04.869013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.869021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.869061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.869067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.869233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.869242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.869448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.869456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.869638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.869647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 
00:32:20.898 [2024-11-26 07:42:04.869949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.869957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.870284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.870294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.870451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.870460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.870637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.870646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.870814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.870822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 
00:32:20.898 [2024-11-26 07:42:04.871151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.871158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.871483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.871491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.871665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.871673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.872013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.872021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.898 qpair failed and we were unable to recover it. 00:32:20.898 [2024-11-26 07:42:04.872363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.898 [2024-11-26 07:42:04.872371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 
00:32:20.899 [2024-11-26 07:42:04.872718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.872726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.873042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.873050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.873089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.873095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.873401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.873409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.873629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.873637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 
00:32:20.899 [2024-11-26 07:42:04.873860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.873871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.874075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.874084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.874426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.874433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.874610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.874618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.874966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.874974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 
00:32:20.899 [2024-11-26 07:42:04.875162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.875170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.875347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.875355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.875656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.875665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.875949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.875957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.876133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.876142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 
00:32:20.899 [2024-11-26 07:42:04.876469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.876476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.876722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.876730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.877032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.877040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.877349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.877357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.877672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.877680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 
00:32:20.899 [2024-11-26 07:42:04.877845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.877854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.878157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.878165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.878349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.878365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.878677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.878684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.878729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.878735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 
00:32:20.899 [2024-11-26 07:42:04.879058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.879066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.879248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.879257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.879566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.879574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.879927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.879936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.880266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.880274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 
00:32:20.899 [2024-11-26 07:42:04.880455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.880463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.880654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.880663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.880703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.880709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.881007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.881015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.881260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.881269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 
00:32:20.899 [2024-11-26 07:42:04.881564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.881572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.881887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.899 [2024-11-26 07:42:04.881895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.899 qpair failed and we were unable to recover it. 00:32:20.899 [2024-11-26 07:42:04.882212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.900 [2024-11-26 07:42:04.882220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.900 qpair failed and we were unable to recover it. 00:32:20.900 [2024-11-26 07:42:04.882395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.900 [2024-11-26 07:42:04.882404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.900 qpair failed and we were unable to recover it. 00:32:20.900 [2024-11-26 07:42:04.882571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.900 [2024-11-26 07:42:04.882579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.900 qpair failed and we were unable to recover it. 
00:32:20.900 [2024-11-26 07:42:04.882663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.900 [2024-11-26 07:42:04.882671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.900 qpair failed and we were unable to recover it.
[... the same connect()/qpair error pair repeats for every reconnect attempt from 07:42:04.882663 through 07:42:04.912907: posix.c:1054:posix_sock_create connect() failed with errno = 111, then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420, and each time "qpair failed and we were unable to recover it." ...]
00:32:20.903 [2024-11-26 07:42:04.913209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.913218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.913588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.913596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.913877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.913885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.914083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.914091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.914257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.914264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 
00:32:20.903 [2024-11-26 07:42:04.914522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.914531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.914710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.914719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.915034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.915042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.915376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.915384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.915538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.915547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 
00:32:20.903 [2024-11-26 07:42:04.915840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.915848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.916014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.916023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.916344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.916352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.916726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.916735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.917043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.917052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 
00:32:20.903 [2024-11-26 07:42:04.917381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.917389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.917578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.917586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.917750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.917759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.918048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.918056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.918367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.918375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 
00:32:20.903 [2024-11-26 07:42:04.918703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.918711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.919025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.919033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.919212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.919221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.919544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.919553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.919874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.919882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 
00:32:20.903 [2024-11-26 07:42:04.920155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.920164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.920327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.920336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.920665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.920673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.920748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.903 [2024-11-26 07:42:04.920754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.903 qpair failed and we were unable to recover it. 00:32:20.903 [2024-11-26 07:42:04.921073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.921081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 
00:32:20.904 [2024-11-26 07:42:04.921277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.921285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.921606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.921614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.921998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.922006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.922349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.922358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.922525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.922533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 
00:32:20.904 [2024-11-26 07:42:04.922572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.922578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.922857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.922870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.923185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.923194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.923507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.923515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.923845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.923853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 
00:32:20.904 [2024-11-26 07:42:04.924162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.924170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.924345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.924354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.924725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.924733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.925048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.925057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.925382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.925390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 
00:32:20.904 [2024-11-26 07:42:04.925687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.925695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.925878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.925886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.926174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.926182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.926469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.926477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.926541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.926548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 
00:32:20.904 [2024-11-26 07:42:04.926698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.926707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.926868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.926877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.927153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.927161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.927481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.927489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.927701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.927709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 
00:32:20.904 [2024-11-26 07:42:04.927888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.927896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.928207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.928215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.928534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.928542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.928840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.928848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.929141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.929149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 
00:32:20.904 [2024-11-26 07:42:04.929460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.929468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.929789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.929798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.929981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.929990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.930167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.930175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.930366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.930375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 
00:32:20.904 [2024-11-26 07:42:04.930677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.930685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.904 [2024-11-26 07:42:04.931001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.904 [2024-11-26 07:42:04.931010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.904 qpair failed and we were unable to recover it. 00:32:20.905 [2024-11-26 07:42:04.931356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.905 [2024-11-26 07:42:04.931364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.905 qpair failed and we were unable to recover it. 00:32:20.905 [2024-11-26 07:42:04.931538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.905 [2024-11-26 07:42:04.931547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.905 qpair failed and we were unable to recover it. 00:32:20.905 [2024-11-26 07:42:04.931917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.905 [2024-11-26 07:42:04.931926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.905 qpair failed and we were unable to recover it. 
00:32:20.905 [2024-11-26 07:42:04.932233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.905 [2024-11-26 07:42:04.932241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.905 qpair failed and we were unable to recover it. 00:32:20.905 [2024-11-26 07:42:04.932429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.905 [2024-11-26 07:42:04.932437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.905 qpair failed and we were unable to recover it. 00:32:20.905 [2024-11-26 07:42:04.932590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.905 [2024-11-26 07:42:04.932599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.905 qpair failed and we were unable to recover it. 00:32:20.905 [2024-11-26 07:42:04.932792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.905 [2024-11-26 07:42:04.932800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.905 qpair failed and we were unable to recover it. 00:32:20.905 [2024-11-26 07:42:04.932969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.905 [2024-11-26 07:42:04.932977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.905 qpair failed and we were unable to recover it. 
00:32:20.905 [2024-11-26 07:42:04.933203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.905 [2024-11-26 07:42:04.933211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:20.905 qpair failed and we were unable to recover it.
[The same three-line failure sequence — posix.c:1054:posix_sock_create "connect() failed, errno = 111", followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error" against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats ~114 more times between 07:42:04.933 and 07:42:04.964 (log clock 00:32:20.905–00:32:20.908). The tqpair handle is 0x7f90f4000b90 throughout, except for two occurrences of 0x7f90f0000b90 at 07:42:04.941053 and 07:42:04.941382. Repeated entries elided here for readability.]
00:32:20.908 [2024-11-26 07:42:04.964447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.964455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.964768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.964776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.965099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.965107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.965462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.965471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.965780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.965788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 
00:32:20.908 [2024-11-26 07:42:04.966176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.966184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.966382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.966392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.966731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.966739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.967035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.967043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.967357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.967365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 
00:32:20.908 [2024-11-26 07:42:04.967675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.967683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.967838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.967847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.968172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.968181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.968499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.968506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.968865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.968874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 
00:32:20.908 [2024-11-26 07:42:04.969140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.969149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.969333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.969340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.969558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.969566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.969886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.969894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.969965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.969971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 
00:32:20.908 [2024-11-26 07:42:04.970257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.970265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.970540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.970548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.970881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.970890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.971209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.908 [2024-11-26 07:42:04.971217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.908 qpair failed and we were unable to recover it. 00:32:20.908 [2024-11-26 07:42:04.971411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.971419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 
00:32:20.909 [2024-11-26 07:42:04.971591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.971599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.971885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.971894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.971931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.971938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.972230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.972239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.972554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.972562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 
00:32:20.909 [2024-11-26 07:42:04.972696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.972704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.973036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.973044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.973385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.973394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.973728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.973736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.973988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.973996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 
00:32:20.909 [2024-11-26 07:42:04.974179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.974187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.974501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.974509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.974855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.974872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.975183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.975191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.975506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.975514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 
00:32:20.909 [2024-11-26 07:42:04.975583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.975590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.975880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.975889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.976216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.976224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.976377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.976385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.976573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.976581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 
00:32:20.909 [2024-11-26 07:42:04.976849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.976857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.977210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.977222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.977532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.977541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.977854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.977866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.978060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.978068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 
00:32:20.909 [2024-11-26 07:42:04.978246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.978255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.978307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.978315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.978471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.978479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.978672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.978680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.979012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.979021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 
00:32:20.909 [2024-11-26 07:42:04.979341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.979349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.979664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.979672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.980027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.980036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.980372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.980380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.980692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.980700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 
00:32:20.909 [2024-11-26 07:42:04.980858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.980873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.981225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.909 [2024-11-26 07:42:04.981233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.909 qpair failed and we were unable to recover it. 00:32:20.909 [2024-11-26 07:42:04.981581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.910 [2024-11-26 07:42:04.981590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.910 qpair failed and we were unable to recover it. 00:32:20.910 [2024-11-26 07:42:04.981829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.910 [2024-11-26 07:42:04.981838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.910 qpair failed and we were unable to recover it. 00:32:20.910 [2024-11-26 07:42:04.982156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.910 [2024-11-26 07:42:04.982164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.910 qpair failed and we were unable to recover it. 
00:32:20.910 [2024-11-26 07:42:04.982378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.910 [2024-11-26 07:42:04.982385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.910 qpair failed and we were unable to recover it. 00:32:20.910 [2024-11-26 07:42:04.982695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.910 [2024-11-26 07:42:04.982703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.910 qpair failed and we were unable to recover it. 00:32:20.910 [2024-11-26 07:42:04.982855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.910 [2024-11-26 07:42:04.982867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.910 qpair failed and we were unable to recover it. 00:32:20.910 [2024-11-26 07:42:04.983051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.910 [2024-11-26 07:42:04.983060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.910 qpair failed and we were unable to recover it. 00:32:20.910 [2024-11-26 07:42:04.983359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.910 [2024-11-26 07:42:04.983366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.910 qpair failed and we were unable to recover it. 
00:32:20.910 [2024-11-26 07:42:04.983494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.910 [2024-11-26 07:42:04.983502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.910 qpair failed and we were unable to recover it. 00:32:20.910 [2024-11-26 07:42:04.983670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.910 [2024-11-26 07:42:04.983678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.910 qpair failed and we were unable to recover it. 00:32:20.910 [2024-11-26 07:42:04.983981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.910 [2024-11-26 07:42:04.983989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.910 qpair failed and we were unable to recover it. 00:32:20.910 [2024-11-26 07:42:04.984150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.910 [2024-11-26 07:42:04.984159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.910 qpair failed and we were unable to recover it. 00:32:20.910 [2024-11-26 07:42:04.984465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.910 [2024-11-26 07:42:04.984473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.910 qpair failed and we were unable to recover it. 
00:32:20.910 [2024-11-26 07:42:04.984630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.910 [2024-11-26 07:42:04.984639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:20.910 qpair failed and we were unable to recover it. 
[... the same message pair repeated from 07:42:04.984 through 07:42:05.015: posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f90f4000b90 (addr=10.0.0.2, port=4420), each ending "qpair failed and we were unable to recover it." ...]
00:32:21.192 [2024-11-26 07:42:05.016212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.192 [2024-11-26 07:42:05.016221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.192 qpair failed and we were unable to recover it. 00:32:21.192 [2024-11-26 07:42:05.016294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.192 [2024-11-26 07:42:05.016303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.192 qpair failed and we were unable to recover it. 00:32:21.192 [2024-11-26 07:42:05.016532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.192 [2024-11-26 07:42:05.016539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.192 qpair failed and we were unable to recover it. 00:32:21.192 [2024-11-26 07:42:05.016887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.192 [2024-11-26 07:42:05.016897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.192 qpair failed and we were unable to recover it. 00:32:21.192 [2024-11-26 07:42:05.017215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.192 [2024-11-26 07:42:05.017223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.192 qpair failed and we were unable to recover it. 
00:32:21.192 [2024-11-26 07:42:05.017389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.192 [2024-11-26 07:42:05.017397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.192 qpair failed and we were unable to recover it. 00:32:21.192 [2024-11-26 07:42:05.017588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.192 [2024-11-26 07:42:05.017596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.192 qpair failed and we were unable to recover it. 00:32:21.192 [2024-11-26 07:42:05.017822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.192 [2024-11-26 07:42:05.017829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.192 qpair failed and we were unable to recover it. 00:32:21.192 [2024-11-26 07:42:05.018109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.192 [2024-11-26 07:42:05.018117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.192 qpair failed and we were unable to recover it. 00:32:21.192 [2024-11-26 07:42:05.018296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.192 [2024-11-26 07:42:05.018305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.192 qpair failed and we were unable to recover it. 
00:32:21.192 [2024-11-26 07:42:05.018579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.192 [2024-11-26 07:42:05.018588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.192 qpair failed and we were unable to recover it. 00:32:21.192 [2024-11-26 07:42:05.018752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.192 [2024-11-26 07:42:05.018762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.192 qpair failed and we were unable to recover it. 00:32:21.192 [2024-11-26 07:42:05.018996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.192 [2024-11-26 07:42:05.019004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.192 qpair failed and we were unable to recover it. 00:32:21.192 [2024-11-26 07:42:05.019187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.192 [2024-11-26 07:42:05.019196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.192 qpair failed and we were unable to recover it. 00:32:21.192 [2024-11-26 07:42:05.019515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.019522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 
00:32:21.193 [2024-11-26 07:42:05.019855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.019867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.020058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.020067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.020193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.020201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.020500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.020508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.020712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.020720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 
00:32:21.193 [2024-11-26 07:42:05.020896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.020904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.021203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.021211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.021246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.021253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.021583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.021591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.021908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.021917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 
00:32:21.193 [2024-11-26 07:42:05.022268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.022276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.022577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.022585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.022764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.022773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.023159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.023167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.023438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.023447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 
00:32:21.193 [2024-11-26 07:42:05.023580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.023588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.023762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.023771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.023824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.023833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.024137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.024145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.024349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.024357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 
00:32:21.193 [2024-11-26 07:42:05.024683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.024692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.025049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.025057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.025399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.025407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.025586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.025595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 00:32:21.193 [2024-11-26 07:42:05.025966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.193 [2024-11-26 07:42:05.025974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.193 qpair failed and we were unable to recover it. 
00:32:21.193 [2024-11-26 07:42:05.026147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.026156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.026429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.026436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.026481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.026487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.026785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.026794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.026988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.026996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 
00:32:21.194 [2024-11-26 07:42:05.027221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.027230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.027554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.027562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.027893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.027901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.028163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.028172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.028357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.028365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 
00:32:21.194 [2024-11-26 07:42:05.028674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.028681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.029012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.029020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.029210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.029218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.029578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.029585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.029913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.029921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 
00:32:21.194 [2024-11-26 07:42:05.030077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.030087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.030395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.030404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.030717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.030725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.030937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.030945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.031137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.031145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 
00:32:21.194 [2024-11-26 07:42:05.031456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.031464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.031653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.031660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.031996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.032004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.032170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.032179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.032501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.032509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 
00:32:21.194 [2024-11-26 07:42:05.032680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.032689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.194 [2024-11-26 07:42:05.032995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.194 [2024-11-26 07:42:05.033004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.194 qpair failed and we were unable to recover it. 00:32:21.195 [2024-11-26 07:42:05.033184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.195 [2024-11-26 07:42:05.033193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.195 qpair failed and we were unable to recover it. 00:32:21.195 [2024-11-26 07:42:05.033380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.195 [2024-11-26 07:42:05.033388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.195 qpair failed and we were unable to recover it. 00:32:21.195 [2024-11-26 07:42:05.033695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.195 [2024-11-26 07:42:05.033704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.195 qpair failed and we were unable to recover it. 
00:32:21.195 [2024-11-26 07:42:05.033889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.195 [2024-11-26 07:42:05.033898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.195 qpair failed and we were unable to recover it. 00:32:21.195 [2024-11-26 07:42:05.034235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.195 [2024-11-26 07:42:05.034243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.195 qpair failed and we were unable to recover it. 00:32:21.195 [2024-11-26 07:42:05.034555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.195 [2024-11-26 07:42:05.034563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.195 qpair failed and we were unable to recover it. 00:32:21.195 [2024-11-26 07:42:05.034714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.195 [2024-11-26 07:42:05.034722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.195 qpair failed and we were unable to recover it. 00:32:21.195 [2024-11-26 07:42:05.035030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.195 [2024-11-26 07:42:05.035038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.195 qpair failed and we were unable to recover it. 
00:32:21.195 [2024-11-26 07:42:05.035353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.195 [2024-11-26 07:42:05.035361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.195 qpair failed and we were unable to recover it.
[... the same three-line error (connect() refused with errno = 111/ECONNREFUSED, followed by the qpair connect failure for tqpair=0x7f90f4000b90 against 10.0.0.2:4420 and "qpair failed and we were unable to recover it.") repeats roughly 115 times over 07:42:05.035-07:42:05.065; repeats elided ...]
00:32:21.199 [2024-11-26 07:42:05.065989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.199 [2024-11-26 07:42:05.065998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.199 qpair failed and we were unable to recover it. 00:32:21.199 [2024-11-26 07:42:05.066176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.199 [2024-11-26 07:42:05.066184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.199 qpair failed and we were unable to recover it. 00:32:21.199 [2024-11-26 07:42:05.066355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.199 [2024-11-26 07:42:05.066364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.199 qpair failed and we were unable to recover it. 00:32:21.199 [2024-11-26 07:42:05.066673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.199 [2024-11-26 07:42:05.066680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.067018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.067027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 
00:32:21.200 [2024-11-26 07:42:05.067390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.067397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.067569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.067577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.068024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.068033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.068372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.068380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.068696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.068704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 
00:32:21.200 [2024-11-26 07:42:05.069023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.069031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.069216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.069225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.069542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.069550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.069865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.069875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.070060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.070069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 
00:32:21.200 [2024-11-26 07:42:05.070373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.070381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.070555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.070564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.070853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.070860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.071165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.071173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.071485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.071492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 
00:32:21.200 [2024-11-26 07:42:05.071802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.071810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.071979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.071989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.072287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.072295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.072625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.072633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.072946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.072954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 
00:32:21.200 [2024-11-26 07:42:05.073140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.073149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.073184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.073193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.073484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.073492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.073804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.073812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 00:32:21.200 [2024-11-26 07:42:05.074136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.200 [2024-11-26 07:42:05.074144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.200 qpair failed and we were unable to recover it. 
00:32:21.201 [2024-11-26 07:42:05.074471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.074479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.074798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.074806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.074969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.074979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.075269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.075277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.075542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.075550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 
00:32:21.201 [2024-11-26 07:42:05.075764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.075772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.075934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.075942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.076092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.076100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.076415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.076424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.076474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.076482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 
00:32:21.201 [2024-11-26 07:42:05.076807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.076815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.076992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.077001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.077167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.077174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.077486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.077494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.077807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.077815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 
00:32:21.201 [2024-11-26 07:42:05.078014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.078022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.078303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.078311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.078630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.078639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.078938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.078946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.079284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.079293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 
00:32:21.201 [2024-11-26 07:42:05.079614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.079621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.079922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.079930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.080303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.080311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.080476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.080485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.201 [2024-11-26 07:42:05.080766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.080774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 
00:32:21.201 [2024-11-26 07:42:05.080952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.201 [2024-11-26 07:42:05.080962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.201 qpair failed and we were unable to recover it. 00:32:21.202 [2024-11-26 07:42:05.081248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.202 [2024-11-26 07:42:05.081256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.202 qpair failed and we were unable to recover it. 00:32:21.202 [2024-11-26 07:42:05.081411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.202 [2024-11-26 07:42:05.081420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.202 qpair failed and we were unable to recover it. 00:32:21.202 [2024-11-26 07:42:05.081714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.202 [2024-11-26 07:42:05.081722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.202 qpair failed and we were unable to recover it. 00:32:21.202 [2024-11-26 07:42:05.082037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.202 [2024-11-26 07:42:05.082045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.202 qpair failed and we were unable to recover it. 
00:32:21.202 [2024-11-26 07:42:05.082387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.202 [2024-11-26 07:42:05.082395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.202 qpair failed and we were unable to recover it. 00:32:21.202 [2024-11-26 07:42:05.082555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.202 [2024-11-26 07:42:05.082564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.202 qpair failed and we were unable to recover it. 00:32:21.202 [2024-11-26 07:42:05.082877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.202 [2024-11-26 07:42:05.082885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.202 qpair failed and we were unable to recover it. 00:32:21.202 [2024-11-26 07:42:05.083262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.202 [2024-11-26 07:42:05.083270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.202 qpair failed and we were unable to recover it. 00:32:21.202 [2024-11-26 07:42:05.083571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.202 [2024-11-26 07:42:05.083580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.202 qpair failed and we were unable to recover it. 
00:32:21.202 [2024-11-26 07:42:05.083807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.202 [2024-11-26 07:42:05.083814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.202 qpair failed and we were unable to recover it. 00:32:21.202 [2024-11-26 07:42:05.084116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.202 [2024-11-26 07:42:05.084124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.202 qpair failed and we were unable to recover it. 00:32:21.202 [2024-11-26 07:42:05.084305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.202 [2024-11-26 07:42:05.084314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.202 qpair failed and we were unable to recover it. 00:32:21.202 [2024-11-26 07:42:05.084528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.202 [2024-11-26 07:42:05.084537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.202 qpair failed and we were unable to recover it. 00:32:21.202 [2024-11-26 07:42:05.084685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.202 [2024-11-26 07:42:05.084693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.202 qpair failed and we were unable to recover it. 
00:32:21.202 [2024-11-26 07:42:05.084887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.202 [2024-11-26 07:42:05.084896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.202 qpair failed and we were unable to recover it. 00:32:21.202 [2024-11-26 07:42:05.085084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.202 [2024-11-26 07:42:05.085092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.202 qpair failed and we were unable to recover it. 00:32:21.202 [2024-11-26 07:42:05.085397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.202 [2024-11-26 07:42:05.085405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.202 qpair failed and we were unable to recover it. 00:32:21.202 [2024-11-26 07:42:05.085720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.202 [2024-11-26 07:42:05.085728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.202 qpair failed and we were unable to recover it. 00:32:21.202 [2024-11-26 07:42:05.086082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.202 [2024-11-26 07:42:05.086090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.202 qpair failed and we were unable to recover it. 
00:32:21.202 [2024-11-26 07:42:05.086388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.202 [2024-11-26 07:42:05.086396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.202 qpair failed and we were unable to recover it.
[... the same error triplet — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously from 07:42:05.086712 through 07:42:05.117325 ...]
00:32:21.207 [2024-11-26 07:42:05.117620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.207 [2024-11-26 07:42:05.117627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.207 qpair failed and we were unable to recover it.
00:32:21.207 [2024-11-26 07:42:05.117786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.207 [2024-11-26 07:42:05.117795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.207 qpair failed and we were unable to recover it. 00:32:21.207 [2024-11-26 07:42:05.118103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.207 [2024-11-26 07:42:05.118111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.207 qpair failed and we were unable to recover it. 00:32:21.207 [2024-11-26 07:42:05.118458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.207 [2024-11-26 07:42:05.118466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.207 qpair failed and we were unable to recover it. 00:32:21.207 [2024-11-26 07:42:05.118628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.207 [2024-11-26 07:42:05.118637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.207 qpair failed and we were unable to recover it. 00:32:21.207 [2024-11-26 07:42:05.118789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.207 [2024-11-26 07:42:05.118796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.207 qpair failed and we were unable to recover it. 
00:32:21.207 [2024-11-26 07:42:05.118984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.207 [2024-11-26 07:42:05.118992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.207 qpair failed and we were unable to recover it. 00:32:21.207 [2024-11-26 07:42:05.119316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.207 [2024-11-26 07:42:05.119324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.207 qpair failed and we were unable to recover it. 00:32:21.207 [2024-11-26 07:42:05.119619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.207 [2024-11-26 07:42:05.119627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.207 qpair failed and we were unable to recover it. 00:32:21.207 [2024-11-26 07:42:05.119911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.207 [2024-11-26 07:42:05.119919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.207 qpair failed and we were unable to recover it. 00:32:21.207 [2024-11-26 07:42:05.120245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.207 [2024-11-26 07:42:05.120253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.207 qpair failed and we were unable to recover it. 
00:32:21.207 [2024-11-26 07:42:05.120413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.207 [2024-11-26 07:42:05.120422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.207 qpair failed and we were unable to recover it. 00:32:21.207 [2024-11-26 07:42:05.120465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.207 [2024-11-26 07:42:05.120474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.207 qpair failed and we were unable to recover it. 00:32:21.207 [2024-11-26 07:42:05.120641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.207 [2024-11-26 07:42:05.120650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.207 qpair failed and we were unable to recover it. 00:32:21.207 [2024-11-26 07:42:05.120930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.207 [2024-11-26 07:42:05.120938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.207 qpair failed and we were unable to recover it. 00:32:21.207 [2024-11-26 07:42:05.121292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.207 [2024-11-26 07:42:05.121300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.207 qpair failed and we were unable to recover it. 
00:32:21.207 [2024-11-26 07:42:05.121626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.207 [2024-11-26 07:42:05.121634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.207 qpair failed and we were unable to recover it. 00:32:21.207 [2024-11-26 07:42:05.121837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.207 [2024-11-26 07:42:05.121844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.207 qpair failed and we were unable to recover it. 00:32:21.207 [2024-11-26 07:42:05.122035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.207 [2024-11-26 07:42:05.122045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.207 qpair failed and we were unable to recover it. 00:32:21.208 [2024-11-26 07:42:05.122226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.122233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 00:32:21.208 [2024-11-26 07:42:05.122522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.122530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 
00:32:21.208 [2024-11-26 07:42:05.122855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.122866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 00:32:21.208 [2024-11-26 07:42:05.123186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.123194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 00:32:21.208 [2024-11-26 07:42:05.123554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.123562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 00:32:21.208 [2024-11-26 07:42:05.123756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.123765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 00:32:21.208 [2024-11-26 07:42:05.123932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.123940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 
00:32:21.208 [2024-11-26 07:42:05.124310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.124318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 00:32:21.208 [2024-11-26 07:42:05.124668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.124676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 00:32:21.208 [2024-11-26 07:42:05.124867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.124875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 00:32:21.208 [2024-11-26 07:42:05.125032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.125039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 00:32:21.208 [2024-11-26 07:42:05.125315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.125323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 
00:32:21.208 [2024-11-26 07:42:05.125717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.125725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 00:32:21.208 [2024-11-26 07:42:05.125977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.125985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 00:32:21.208 [2024-11-26 07:42:05.126311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.126319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 00:32:21.208 [2024-11-26 07:42:05.126643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.126651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 00:32:21.208 [2024-11-26 07:42:05.126687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.126694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 
00:32:21.208 [2024-11-26 07:42:05.126856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.126868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 00:32:21.208 [2024-11-26 07:42:05.127073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.127081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 00:32:21.208 [2024-11-26 07:42:05.127488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.127496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 00:32:21.208 [2024-11-26 07:42:05.127684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.127692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.208 qpair failed and we were unable to recover it. 00:32:21.208 [2024-11-26 07:42:05.128009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.208 [2024-11-26 07:42:05.128017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 
00:32:21.209 [2024-11-26 07:42:05.128246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.128254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.128519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.128527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.128859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.128870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.129187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.129195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.129546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.129554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 
00:32:21.209 [2024-11-26 07:42:05.129842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.129850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.129994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.130002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.130186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.130194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.130494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.130502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.130672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.130680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 
00:32:21.209 [2024-11-26 07:42:05.130833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.130842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.131148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.131157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.131435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.131443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.131669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.131677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.131992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.132000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 
00:32:21.209 [2024-11-26 07:42:05.132319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.132327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.132657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.132665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.132989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.132997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.133308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.133316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.133515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.133524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 
00:32:21.209 [2024-11-26 07:42:05.133692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.133700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.133971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.133979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.134279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.134286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.134570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.134578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 00:32:21.209 [2024-11-26 07:42:05.134925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.134933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.209 qpair failed and we were unable to recover it. 
00:32:21.209 [2024-11-26 07:42:05.135140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.209 [2024-11-26 07:42:05.135147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.210 qpair failed and we were unable to recover it. 00:32:21.210 [2024-11-26 07:42:05.135484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.210 [2024-11-26 07:42:05.135492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.210 qpair failed and we were unable to recover it. 00:32:21.210 [2024-11-26 07:42:05.135803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.210 [2024-11-26 07:42:05.135811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.210 qpair failed and we were unable to recover it. 00:32:21.210 [2024-11-26 07:42:05.136124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.210 [2024-11-26 07:42:05.136131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.210 qpair failed and we were unable to recover it. 00:32:21.210 [2024-11-26 07:42:05.136316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.210 [2024-11-26 07:42:05.136325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.210 qpair failed and we were unable to recover it. 
00:32:21.210 [2024-11-26 07:42:05.136478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.210 [2024-11-26 07:42:05.136486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.210 qpair failed and we were unable to recover it. 00:32:21.210 [2024-11-26 07:42:05.136803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.210 [2024-11-26 07:42:05.136811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.210 qpair failed and we were unable to recover it. 00:32:21.210 [2024-11-26 07:42:05.137119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.210 [2024-11-26 07:42:05.137128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.210 qpair failed and we were unable to recover it. 00:32:21.210 [2024-11-26 07:42:05.137303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.210 [2024-11-26 07:42:05.137312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.210 qpair failed and we were unable to recover it. 00:32:21.210 [2024-11-26 07:42:05.137635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.210 [2024-11-26 07:42:05.137643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.210 qpair failed and we were unable to recover it. 
00:32:21.210 [2024-11-26 07:42:05.137816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.210 [2024-11-26 07:42:05.137825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.210 qpair failed and we were unable to recover it. 00:32:21.210 [2024-11-26 07:42:05.138019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.210 [2024-11-26 07:42:05.138027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.210 qpair failed and we were unable to recover it. 00:32:21.210 [2024-11-26 07:42:05.138181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.210 [2024-11-26 07:42:05.138190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.210 qpair failed and we were unable to recover it. 00:32:21.210 [2024-11-26 07:42:05.138278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.210 [2024-11-26 07:42:05.138286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.210 qpair failed and we were unable to recover it. 00:32:21.210 [2024-11-26 07:42:05.138563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.210 [2024-11-26 07:42:05.138572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.210 qpair failed and we were unable to recover it. 
00:32:21.214 [2024-11-26 07:42:05.168032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.214 [2024-11-26 07:42:05.168039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.214 qpair failed and we were unable to recover it. 00:32:21.214 [2024-11-26 07:42:05.168297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.214 [2024-11-26 07:42:05.168304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.214 qpair failed and we were unable to recover it. 00:32:21.214 [2024-11-26 07:42:05.168380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.214 [2024-11-26 07:42:05.168387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.214 qpair failed and we were unable to recover it. 00:32:21.214 [2024-11-26 07:42:05.168553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.214 [2024-11-26 07:42:05.168561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.214 qpair failed and we were unable to recover it. 00:32:21.214 [2024-11-26 07:42:05.168933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.214 [2024-11-26 07:42:05.168942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.214 qpair failed and we were unable to recover it. 
00:32:21.214 [2024-11-26 07:42:05.169113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.214 [2024-11-26 07:42:05.169121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.214 qpair failed and we were unable to recover it. 00:32:21.214 [2024-11-26 07:42:05.169412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.214 [2024-11-26 07:42:05.169420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.214 qpair failed and we were unable to recover it. 00:32:21.214 [2024-11-26 07:42:05.169734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.214 [2024-11-26 07:42:05.169742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.214 qpair failed and we were unable to recover it. 00:32:21.214 [2024-11-26 07:42:05.170074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.214 [2024-11-26 07:42:05.170082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 00:32:21.215 [2024-11-26 07:42:05.170392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.170400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 
00:32:21.215 [2024-11-26 07:42:05.170587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.170595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 00:32:21.215 [2024-11-26 07:42:05.170891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.170899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 00:32:21.215 [2024-11-26 07:42:05.171094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.171102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 00:32:21.215 [2024-11-26 07:42:05.171327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.171335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 00:32:21.215 [2024-11-26 07:42:05.171646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.171655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 
00:32:21.215 [2024-11-26 07:42:05.171851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.171859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 00:32:21.215 [2024-11-26 07:42:05.172189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.172197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 00:32:21.215 [2024-11-26 07:42:05.172390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.172398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 00:32:21.215 [2024-11-26 07:42:05.172561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.172568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 00:32:21.215 [2024-11-26 07:42:05.172927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.172937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 
00:32:21.215 [2024-11-26 07:42:05.173227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.173237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 00:32:21.215 [2024-11-26 07:42:05.173582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.173590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 00:32:21.215 [2024-11-26 07:42:05.173917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.173925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 00:32:21.215 [2024-11-26 07:42:05.174089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.174100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 00:32:21.215 [2024-11-26 07:42:05.174415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.174424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 
00:32:21.215 [2024-11-26 07:42:05.174598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.174606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 00:32:21.215 [2024-11-26 07:42:05.174821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.174829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 00:32:21.215 [2024-11-26 07:42:05.175063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.175071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 00:32:21.215 [2024-11-26 07:42:05.175399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.175408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 00:32:21.215 [2024-11-26 07:42:05.175602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.215 [2024-11-26 07:42:05.175610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.215 qpair failed and we were unable to recover it. 
00:32:21.216 [2024-11-26 07:42:05.175830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.175838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.176176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.176186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.176538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.176547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.176733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.176741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.177045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.177053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 
00:32:21.216 [2024-11-26 07:42:05.177121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.177127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.177436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.177443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.177761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.177768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.178089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.178097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.178432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.178440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 
00:32:21.216 [2024-11-26 07:42:05.178651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.178659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.178724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.178732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.179005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.179013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.179302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.179310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.179470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.179478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 
00:32:21.216 [2024-11-26 07:42:05.179819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.179826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.180145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.180153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.180469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.180477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.180790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.180797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.180979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.180988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 
00:32:21.216 [2024-11-26 07:42:05.181324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.181332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.181690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.181698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.182014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.182022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.182355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.182363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.182656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.182665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 
00:32:21.216 [2024-11-26 07:42:05.182849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.182857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.183069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.183076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.216 [2024-11-26 07:42:05.183252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.216 [2024-11-26 07:42:05.183261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.216 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.183579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.183588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.183900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.183908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 
00:32:21.217 [2024-11-26 07:42:05.184237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.184246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.184586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.184594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.184784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.184792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.185088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.185096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.185404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.185411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 
00:32:21.217 [2024-11-26 07:42:05.185590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.185597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.185888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.185896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.186100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.186108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.186432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.186441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.186624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.186633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 
00:32:21.217 [2024-11-26 07:42:05.186935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.186944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.187264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.187272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.187582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.187590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.187889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.187897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.188243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.188251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 
00:32:21.217 [2024-11-26 07:42:05.188291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.188297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.188340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.188346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.188636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.188644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.188937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.188945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.189263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.189271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 
00:32:21.217 [2024-11-26 07:42:05.189445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.189454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.189787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.189796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.189993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.190002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.190320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.190328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 00:32:21.217 [2024-11-26 07:42:05.190513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.217 [2024-11-26 07:42:05.190522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.217 qpair failed and we were unable to recover it. 
00:32:21.218 [2024-11-26 07:42:05.190564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.190572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.190924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.190932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.191134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.191144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.191330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.191337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.191659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.191666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.191891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.191899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.192233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.192241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.192432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.192440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.192759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.192768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.193056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.193065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.193375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.193383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.193702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.193711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.193873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.193881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.194175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.194183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.194353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.194362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.194670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.194681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.194715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.194723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.194901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.194909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.195294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.195302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.195473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.195483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.195643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.195652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.195880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.195889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.196180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.196189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.196501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.196509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.196830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.196838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.197160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.197169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.218 qpair failed and we were unable to recover it.
00:32:21.218 [2024-11-26 07:42:05.197481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.218 [2024-11-26 07:42:05.197490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.197805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.197814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.197993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.198002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.198262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.198270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.198578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.198586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.198879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.198888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.199191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.199200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.199399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.199407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.199729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.199738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.200035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.200044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.200374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.200382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.200562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.200570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.200752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.200762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.201094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.201103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.201402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.201411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.201565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.201573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.201737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.201745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.201929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.201939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.202280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.202289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.202431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.202439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.202637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.202645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.202934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.202943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.203267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.203275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.203582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.203590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.203902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.203911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.204225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.204233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.204423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.219 [2024-11-26 07:42:05.204431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.219 qpair failed and we were unable to recover it.
00:32:21.219 [2024-11-26 07:42:05.204585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.204593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.204909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.204917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.205100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.205110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.205284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.205293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.205607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.205616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.205820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.205829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.206114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.206123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.206302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.206311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.206656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.206665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.206991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.206999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.207321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.207329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.207725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.207733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.208038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.208047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.208371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.208379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.208414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.208421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.208650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.208658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.208972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.208981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.209284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.209293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.209476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.209485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.209775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.209783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.209936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.209945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.210242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.210251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.210581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.210589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.210920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.210928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.211240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.211248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.211530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.211538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.220 qpair failed and we were unable to recover it.
00:32:21.220 [2024-11-26 07:42:05.211843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.220 [2024-11-26 07:42:05.211851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.221 qpair failed and we were unable to recover it.
00:32:21.221 [2024-11-26 07:42:05.212154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.221 [2024-11-26 07:42:05.212163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.221 qpair failed and we were unable to recover it.
00:32:21.221 [2024-11-26 07:42:05.212323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.221 [2024-11-26 07:42:05.212332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.221 qpair failed and we were unable to recover it.
00:32:21.221 [2024-11-26 07:42:05.212603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.221 [2024-11-26 07:42:05.212611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.221 qpair failed and we were unable to recover it.
00:32:21.221 [2024-11-26 07:42:05.212956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.221 [2024-11-26 07:42:05.212965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.221 qpair failed and we were unable to recover it.
00:32:21.221 [2024-11-26 07:42:05.213291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.221 [2024-11-26 07:42:05.213299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.221 qpair failed and we were unable to recover it.
00:32:21.221 [2024-11-26 07:42:05.213660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.221 [2024-11-26 07:42:05.213669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.221 qpair failed and we were unable to recover it.
00:32:21.221 [2024-11-26 07:42:05.213873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.221 [2024-11-26 07:42:05.213881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.221 qpair failed and we were unable to recover it.
00:32:21.221 [2024-11-26 07:42:05.214190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.221 [2024-11-26 07:42:05.214198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.221 qpair failed and we were unable to recover it.
00:32:21.221 [2024-11-26 07:42:05.214526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.221 [2024-11-26 07:42:05.214534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.221 qpair failed and we were unable to recover it.
00:32:21.221 [2024-11-26 07:42:05.214827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.221 [2024-11-26 07:42:05.214835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.221 qpair failed and we were unable to recover it.
00:32:21.221 [2024-11-26 07:42:05.215167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.221 [2024-11-26 07:42:05.215176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.221 qpair failed and we were unable to recover it.
00:32:21.221 [2024-11-26 07:42:05.215373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.221 [2024-11-26 07:42:05.215382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.221 qpair failed and we were unable to recover it.
00:32:21.221 [2024-11-26 07:42:05.215675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.221 [2024-11-26 07:42:05.215683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.221 qpair failed and we were unable to recover it.
00:32:21.221 [2024-11-26 07:42:05.216004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.221 [2024-11-26 07:42:05.216013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.221 qpair failed and we were unable to recover it.
00:32:21.221 [2024-11-26 07:42:05.216338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.221 [2024-11-26 07:42:05.216346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.221 qpair failed and we were unable to recover it.
00:32:21.221 [2024-11-26 07:42:05.216664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.221 [2024-11-26 07:42:05.216673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.221 qpair failed and we were unable to recover it. 00:32:21.221 [2024-11-26 07:42:05.216876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.221 [2024-11-26 07:42:05.216885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.221 qpair failed and we were unable to recover it. 00:32:21.221 [2024-11-26 07:42:05.217097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.221 [2024-11-26 07:42:05.217106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.221 qpair failed and we were unable to recover it. 00:32:21.221 [2024-11-26 07:42:05.217390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.221 [2024-11-26 07:42:05.217398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.221 qpair failed and we were unable to recover it. 00:32:21.221 [2024-11-26 07:42:05.217716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.221 [2024-11-26 07:42:05.217724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.221 qpair failed and we were unable to recover it. 
00:32:21.221 [2024-11-26 07:42:05.218030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.221 [2024-11-26 07:42:05.218039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.221 qpair failed and we were unable to recover it. 00:32:21.221 [2024-11-26 07:42:05.218212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.221 [2024-11-26 07:42:05.218220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.221 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.218517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.218524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.218831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.218840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.219151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.219159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 
00:32:21.222 [2024-11-26 07:42:05.219229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.219235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.219318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.219326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.219486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.219495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.219691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.219700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.220036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.220043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 
00:32:21.222 [2024-11-26 07:42:05.220227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.220235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.220514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.220522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.220692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.220700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.221037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.221045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.221225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.221234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 
00:32:21.222 [2024-11-26 07:42:05.221515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.221523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.221688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.221695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.221886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.221895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.222180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.222188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.222528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.222536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 
00:32:21.222 [2024-11-26 07:42:05.222698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.222706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.222971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.222979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.223177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.223185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.223339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.223346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.223390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.223396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 
00:32:21.222 [2024-11-26 07:42:05.223716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.223724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.223917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.223926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.224220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.222 [2024-11-26 07:42:05.224228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.222 qpair failed and we were unable to recover it. 00:32:21.222 [2024-11-26 07:42:05.224585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.224593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.224775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.224782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 
00:32:21.223 [2024-11-26 07:42:05.224960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.224968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.225253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.225261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.225306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.225313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.225600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.225608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.225808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.225816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 
00:32:21.223 [2024-11-26 07:42:05.226173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.226183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.226377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.226385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.226764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.226773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.227167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.227176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.227356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.227364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 
00:32:21.223 [2024-11-26 07:42:05.227629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.227636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.227969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.227977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.228289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.228297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.228610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.228618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.228795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.228804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 
00:32:21.223 [2024-11-26 07:42:05.229103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.229112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.229300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.229309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.229576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.229584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.229758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.229766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.230072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.230080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 
00:32:21.223 [2024-11-26 07:42:05.230401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.230409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.230618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.230627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.230815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.230823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.231112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.231121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 00:32:21.223 [2024-11-26 07:42:05.231301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.223 [2024-11-26 07:42:05.231309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.223 qpair failed and we were unable to recover it. 
00:32:21.224 [2024-11-26 07:42:05.231608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.231617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.231914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.231923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.232238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.232245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.232544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.232552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.232706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.232713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 
00:32:21.224 [2024-11-26 07:42:05.233035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.233043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.233220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.233228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.233549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.233557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.233741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.233750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.234056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.234064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 
00:32:21.224 [2024-11-26 07:42:05.234375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.234384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.234698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.234707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.234905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.234915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.235092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.235100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.235282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.235291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 
00:32:21.224 [2024-11-26 07:42:05.235450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.235459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.235801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.235810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.236088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.236097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.236271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.236280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.236547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.236555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 
00:32:21.224 [2024-11-26 07:42:05.236894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.236905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.237184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.237192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.237507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.237516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.237562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.237568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 00:32:21.224 [2024-11-26 07:42:05.237842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.237850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it. 
00:32:21.224 [2024-11-26 07:42:05.238053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.224 [2024-11-26 07:42:05.238061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.224 qpair failed and we were unable to recover it.
00:32:21.229 [... the same three-line sequence — posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock connection error for tqpair=0x7f90f4000b90 against addr=10.0.0.2 port=4420, followed by "qpair failed and we were unable to recover it." — repeats for every subsequent reconnect attempt from 07:42:05.238 through 07:42:05.268 ...]
00:32:21.229 [2024-11-26 07:42:05.269239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.229 [2024-11-26 07:42:05.269247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.229 qpair failed and we were unable to recover it. 00:32:21.229 [2024-11-26 07:42:05.269435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.229 [2024-11-26 07:42:05.269444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.229 qpair failed and we were unable to recover it. 00:32:21.229 [2024-11-26 07:42:05.269769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.229 [2024-11-26 07:42:05.269778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.229 qpair failed and we were unable to recover it. 00:32:21.229 [2024-11-26 07:42:05.269982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.229 [2024-11-26 07:42:05.269991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.229 qpair failed and we were unable to recover it. 00:32:21.229 [2024-11-26 07:42:05.270294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.229 [2024-11-26 07:42:05.270302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.229 qpair failed and we were unable to recover it. 
00:32:21.229 [2024-11-26 07:42:05.270339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.229 [2024-11-26 07:42:05.270346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.229 qpair failed and we were unable to recover it. 00:32:21.229 [2024-11-26 07:42:05.270646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.229 [2024-11-26 07:42:05.270654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.229 qpair failed and we were unable to recover it. 00:32:21.229 [2024-11-26 07:42:05.270801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.229 [2024-11-26 07:42:05.270809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.229 qpair failed and we were unable to recover it. 00:32:21.229 [2024-11-26 07:42:05.271109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.229 [2024-11-26 07:42:05.271117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.229 qpair failed and we were unable to recover it. 00:32:21.229 [2024-11-26 07:42:05.271422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.229 [2024-11-26 07:42:05.271431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.229 qpair failed and we were unable to recover it. 
00:32:21.229 [2024-11-26 07:42:05.271742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.229 [2024-11-26 07:42:05.271750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.229 qpair failed and we were unable to recover it. 00:32:21.229 [2024-11-26 07:42:05.272083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.229 [2024-11-26 07:42:05.272092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.229 qpair failed and we were unable to recover it. 00:32:21.229 [2024-11-26 07:42:05.272250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.229 [2024-11-26 07:42:05.272259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.272629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.272637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.272791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.272799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 
00:32:21.230 [2024-11-26 07:42:05.272982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.272990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.273182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.273191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.273531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.273539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.273878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.273887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.274169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.274178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 
00:32:21.230 [2024-11-26 07:42:05.274473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.274482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.274636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.274645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.274832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.274840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.275146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.275155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.275194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.275202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 
00:32:21.230 [2024-11-26 07:42:05.275477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.275485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.275822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.275830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.276165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.276176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.276494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.276503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.276682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.276691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 
00:32:21.230 [2024-11-26 07:42:05.276966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.276974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.277182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.277190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.277535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.277543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.277851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.277860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.278180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.278188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 
00:32:21.230 [2024-11-26 07:42:05.278437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.278447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.278797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.278806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.278982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.278992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.279183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.279193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 00:32:21.230 [2024-11-26 07:42:05.279513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.230 [2024-11-26 07:42:05.279522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.230 qpair failed and we were unable to recover it. 
00:32:21.230 [2024-11-26 07:42:05.279821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.279829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.280040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.280049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.280374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.280382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.280694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.280703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.281019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.281028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 
00:32:21.231 [2024-11-26 07:42:05.281343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.281351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.281542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.281551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.281847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.281856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.282182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.282191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.282375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.282385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 
00:32:21.231 [2024-11-26 07:42:05.282590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.282599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.282910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.282919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.283245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.283254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.283558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.283566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.283876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.283885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 
00:32:21.231 [2024-11-26 07:42:05.284032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.284040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.284352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.284361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.284535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.284545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.284858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.284875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.285190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.285198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 
00:32:21.231 [2024-11-26 07:42:05.285514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.285522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.285812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.285821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.285968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.285976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.286369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.286378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.286675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.286683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 
00:32:21.231 [2024-11-26 07:42:05.287002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.287011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.287316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.287324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.287621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.287631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.287824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.287833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.231 qpair failed and we were unable to recover it. 00:32:21.231 [2024-11-26 07:42:05.288145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.231 [2024-11-26 07:42:05.288154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.232 qpair failed and we were unable to recover it. 
00:32:21.232 [2024-11-26 07:42:05.288451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.232 [2024-11-26 07:42:05.288460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.232 qpair failed and we were unable to recover it. 00:32:21.232 [2024-11-26 07:42:05.288790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.232 [2024-11-26 07:42:05.288798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.232 qpair failed and we were unable to recover it. 00:32:21.232 [2024-11-26 07:42:05.289106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.232 [2024-11-26 07:42:05.289114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.232 qpair failed and we were unable to recover it. 00:32:21.232 [2024-11-26 07:42:05.289288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.232 [2024-11-26 07:42:05.289297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.232 qpair failed and we were unable to recover it. 00:32:21.232 [2024-11-26 07:42:05.289517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.232 [2024-11-26 07:42:05.289525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.232 qpair failed and we were unable to recover it. 
00:32:21.232 [2024-11-26 07:42:05.289833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.232 [2024-11-26 07:42:05.289842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.232 qpair failed and we were unable to recover it.
[log condensed: the three-message sequence above — posix.c:1054:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=... with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." — repeats with advancing timestamps roughly 114 more times, from 07:42:05.290024 through 07:42:05.320274. Nearly all repeats report tqpair=0x7f90f4000b90; the two entries at 07:42:05.295032 and 07:42:05.295606 report tqpair=0x7f90fc000b90 instead.]
00:32:21.507 [2024-11-26 07:42:05.320309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.320316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.320626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.320635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.320946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.320955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.321286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.321294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.321363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.321370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 
00:32:21.507 [2024-11-26 07:42:05.321517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.321525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.321828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.321837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.322053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.322061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.322370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.322378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.322547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.322556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 
00:32:21.507 [2024-11-26 07:42:05.322590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.322598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.322900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.322908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.323242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.323250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.323563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.323570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.323907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.323915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 
00:32:21.507 [2024-11-26 07:42:05.324093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.324102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.324380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.324387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.324705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.324712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.324887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.324896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.325213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.325220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 
00:32:21.507 [2024-11-26 07:42:05.325526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.325534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.325826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.325834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.326158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.326166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.326318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.507 [2024-11-26 07:42:05.326325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.507 qpair failed and we were unable to recover it. 00:32:21.507 [2024-11-26 07:42:05.326619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.326628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 
00:32:21.508 [2024-11-26 07:42:05.326677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.326685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.326719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.326728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.327173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.327266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90fc000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.327527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.327565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90fc000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.327751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.327762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 
00:32:21.508 [2024-11-26 07:42:05.328144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.328153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.328340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.328349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.328634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.328641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.328935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.328943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.329246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.329254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 
00:32:21.508 [2024-11-26 07:42:05.329585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.329592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.329746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.329753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.330078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.330087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.330403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.330411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.330705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.330712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 
00:32:21.508 [2024-11-26 07:42:05.331083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.331091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.331409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.331418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.331688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.331697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.332012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.332020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.332180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.332189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 
00:32:21.508 [2024-11-26 07:42:05.332417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.332425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.332748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.332757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.333083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.333092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.333408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.333415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.333608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.333616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 
00:32:21.508 [2024-11-26 07:42:05.333788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.333796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.333986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.333994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.334319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.334327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.334526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.334534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.334874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.334882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 
00:32:21.508 [2024-11-26 07:42:05.335180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.335188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.335349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.335358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.335541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.335550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.335886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.335895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.336216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.336224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 
00:32:21.508 [2024-11-26 07:42:05.336547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.336554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.508 [2024-11-26 07:42:05.336883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.508 [2024-11-26 07:42:05.336891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.508 qpair failed and we were unable to recover it. 00:32:21.509 [2024-11-26 07:42:05.337206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.509 [2024-11-26 07:42:05.337213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.509 qpair failed and we were unable to recover it. 00:32:21.509 [2024-11-26 07:42:05.337532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.509 [2024-11-26 07:42:05.337540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.509 qpair failed and we were unable to recover it. 00:32:21.509 [2024-11-26 07:42:05.337854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.509 [2024-11-26 07:42:05.337865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.509 qpair failed and we were unable to recover it. 
00:32:21.509 [2024-11-26 07:42:05.338030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.509 [2024-11-26 07:42:05.338038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.509 qpair failed and we were unable to recover it. 00:32:21.509 [2024-11-26 07:42:05.338324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.509 [2024-11-26 07:42:05.338332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.509 qpair failed and we were unable to recover it. 00:32:21.509 [2024-11-26 07:42:05.338653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.509 [2024-11-26 07:42:05.338661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.509 qpair failed and we were unable to recover it. 00:32:21.509 [2024-11-26 07:42:05.338960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.509 [2024-11-26 07:42:05.338968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.509 qpair failed and we were unable to recover it. 00:32:21.509 [2024-11-26 07:42:05.339132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.509 [2024-11-26 07:42:05.339141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.509 qpair failed and we were unable to recover it. 
00:32:21.509 [2024-11-26 07:42:05.339374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.509 [2024-11-26 07:42:05.339382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.509 qpair failed and we were unable to recover it. 00:32:21.509 [2024-11-26 07:42:05.339572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.509 [2024-11-26 07:42:05.339580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.509 qpair failed and we were unable to recover it. 00:32:21.509 [2024-11-26 07:42:05.339905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.509 [2024-11-26 07:42:05.339913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.509 qpair failed and we were unable to recover it. 00:32:21.509 [2024-11-26 07:42:05.340076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.509 [2024-11-26 07:42:05.340085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.509 qpair failed and we were unable to recover it. 00:32:21.509 [2024-11-26 07:42:05.340124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.509 [2024-11-26 07:42:05.340132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.509 qpair failed and we were unable to recover it. 
00:32:21.509 [2024-11-26 07:42:05.340275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.509 [2024-11-26 07:42:05.340283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.509 qpair failed and we were unable to recover it.
[The same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it" sequence repeats for tqpair=0x7f90f4000b90 against 10.0.0.2:4420, timestamps 07:42:05.340568 through 07:42:05.354967.]
00:32:21.510 [2024-11-26 07:42:05.355075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.510 [2024-11-26 07:42:05.355082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.510 qpair failed and we were unable to recover it.
00:32:21.511 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:21.511 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:32:21.511 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:21.511 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:21.511 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[Further "connect() failed, errno = 111" / "sock connection error" / "qpair failed" records for tqpair=0x7f90f4000b90 are interleaved with the shell trace above, timestamps 07:42:05.355119 through 07:42:05.356642.]
[The "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it" sequence continues for tqpair=0x7f90f4000b90 against 10.0.0.2:4420, timestamps 07:42:05.356980 through 07:42:05.365776.]
00:32:21.512 [2024-11-26 07:42:05.365945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.512 [2024-11-26 07:42:05.365953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.512 qpair failed and we were unable to recover it.
00:32:21.512 [2024-11-26 07:42:05.366145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.512 [2024-11-26 07:42:05.366152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.512 qpair failed and we were unable to recover it.
00:32:21.512 [2024-11-26 07:42:05.366681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.512 [2024-11-26 07:42:05.366771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f0000b90 with addr=10.0.0.2, port=4420
00:32:21.512 qpair failed and we were unable to recover it.
00:32:21.512 [2024-11-26 07:42:05.367185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.512 [2024-11-26 07:42:05.367226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f0000b90 with addr=10.0.0.2, port=4420
00:32:21.512 qpair failed and we were unable to recover it.
00:32:21.512 [2024-11-26 07:42:05.367603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.512 [2024-11-26 07:42:05.367633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f0000b90 with addr=10.0.0.2, port=4420
00:32:21.512 qpair failed and we were unable to recover it.
00:32:21.512 [2024-11-26 07:42:05.367837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.367848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.368082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.368091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.368173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.368180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.368325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.368333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.368628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.368636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 
00:32:21.512 [2024-11-26 07:42:05.368981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.368990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.369337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.369347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.369568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.369576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.369911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.369920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.370215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.370224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 
00:32:21.512 [2024-11-26 07:42:05.370328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.370334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.370619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.370626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.370930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.370939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.371250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.371258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.371635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.371643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 
00:32:21.512 [2024-11-26 07:42:05.371925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.371934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.372253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.372262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.372334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.372344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.372505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.372512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.372843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.372850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 
00:32:21.512 [2024-11-26 07:42:05.373017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.373027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.373341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.373349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.373637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.373645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.373967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.373976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.374319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.374327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 
00:32:21.512 [2024-11-26 07:42:05.374674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.374682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.512 qpair failed and we were unable to recover it. 00:32:21.512 [2024-11-26 07:42:05.375028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.512 [2024-11-26 07:42:05.375036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.375353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.375363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.375550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.375560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.375920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.375929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 
00:32:21.513 [2024-11-26 07:42:05.376236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.376244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.376502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.376511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.376809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.376817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.377114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.377123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.377440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.377448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 
00:32:21.513 [2024-11-26 07:42:05.377746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.377754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.378070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.378079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.378392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.378401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.378743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.378751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.378936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.378945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 
00:32:21.513 [2024-11-26 07:42:05.379277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.379284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.379598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.379605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.379760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.379777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.380085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.380094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.380413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.380422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 
00:32:21.513 [2024-11-26 07:42:05.380587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.380596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.380810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.380818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.380993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.381002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.381200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.381208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.381539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.381548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 
00:32:21.513 [2024-11-26 07:42:05.381735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.381751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.382066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.382075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.382402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.382411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.382597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.382606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.382771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.382779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 
00:32:21.513 [2024-11-26 07:42:05.383001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.383009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.383229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.383237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.383526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.383536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.383851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.383860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.384163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.384171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 
00:32:21.513 [2024-11-26 07:42:05.384364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.384371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.384680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.384689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.385027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.385036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.385342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.385351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.513 qpair failed and we were unable to recover it. 00:32:21.513 [2024-11-26 07:42:05.385664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.513 [2024-11-26 07:42:05.385672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 
00:32:21.514 [2024-11-26 07:42:05.385872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.385881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 00:32:21.514 [2024-11-26 07:42:05.386170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.386178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 00:32:21.514 [2024-11-26 07:42:05.386331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.386339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 00:32:21.514 [2024-11-26 07:42:05.386650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.386659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 00:32:21.514 [2024-11-26 07:42:05.386979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.386987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 
00:32:21.514 [2024-11-26 07:42:05.387284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.387293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 00:32:21.514 [2024-11-26 07:42:05.387590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.387599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 00:32:21.514 [2024-11-26 07:42:05.387898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.387906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 00:32:21.514 [2024-11-26 07:42:05.388242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.388251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 00:32:21.514 [2024-11-26 07:42:05.388459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.388467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 
00:32:21.514 [2024-11-26 07:42:05.388811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.388820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 00:32:21.514 [2024-11-26 07:42:05.389072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.389080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 00:32:21.514 [2024-11-26 07:42:05.389253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.389261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 00:32:21.514 [2024-11-26 07:42:05.389604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.389613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 00:32:21.514 [2024-11-26 07:42:05.389799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.389808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 
00:32:21.514 [2024-11-26 07:42:05.390131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.390140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 00:32:21.514 [2024-11-26 07:42:05.390378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.390386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 00:32:21.514 [2024-11-26 07:42:05.390570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.390579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 00:32:21.514 [2024-11-26 07:42:05.390888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.390896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 00:32:21.514 [2024-11-26 07:42:05.391194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.514 [2024-11-26 07:42:05.391202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.514 qpair failed and we were unable to recover it. 
00:32:21.514 [2024-11-26 07:42:05.391517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.514 [2024-11-26 07:42:05.391524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.514 qpair failed and we were unable to recover it.
00:32:21.514 [2024-11-26 07:42:05.391840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.514 [2024-11-26 07:42:05.391848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.514 qpair failed and we were unable to recover it.
00:32:21.514 [2024-11-26 07:42:05.392042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.514 [2024-11-26 07:42:05.392050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.514 qpair failed and we were unable to recover it.
00:32:21.514 [2024-11-26 07:42:05.392253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.514 [2024-11-26 07:42:05.392263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.514 qpair failed and we were unable to recover it.
00:32:21.514 [2024-11-26 07:42:05.392598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.514 [2024-11-26 07:42:05.392606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.514 qpair failed and we were unable to recover it.
00:32:21.514 [2024-11-26 07:42:05.392938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.514 [2024-11-26 07:42:05.392946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.514 qpair failed and we were unable to recover it.
00:32:21.514 [2024-11-26 07:42:05.393119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.514 [2024-11-26 07:42:05.393127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.514 qpair failed and we were unable to recover it.
00:32:21.514 [2024-11-26 07:42:05.393289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.514 [2024-11-26 07:42:05.393297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.514 qpair failed and we were unable to recover it.
00:32:21.514 [2024-11-26 07:42:05.393616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.514 [2024-11-26 07:42:05.393624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.514 qpair failed and we were unable to recover it.
00:32:21.514 [2024-11-26 07:42:05.393798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.514 [2024-11-26 07:42:05.393806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.514 qpair failed and we were unable to recover it.
00:32:21.514 [2024-11-26 07:42:05.394010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.514 [2024-11-26 07:42:05.394019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.514 qpair failed and we were unable to recover it.
00:32:21.514 [2024-11-26 07:42:05.394070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.514 [2024-11-26 07:42:05.394077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.394247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.394257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.394566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.394574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.394884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.394893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:21.515 [2024-11-26 07:42:05.395243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.395253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:21.515 [2024-11-26 07:42:05.395551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.395562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:21.515 [2024-11-26 07:42:05.395886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.395896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:21.515 [2024-11-26 07:42:05.396188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.396197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.396382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.396390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.396580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.396589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.396821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.396830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.397163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.397172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.397485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.397493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.397656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.397664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.397838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.397846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.398033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.398042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.398407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.398416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.398700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.398707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.399025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.399033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.399338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.399345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.399654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.399663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.399970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.399978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.400300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.400308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.400506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.400514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.400795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.400803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.400982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.400998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.401341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.401349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.401685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.401693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.401874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.401883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.402176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.402184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.402497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.402505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.402839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.402847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.403056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.403064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.403385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.403394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.403729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.403737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.515 qpair failed and we were unable to recover it.
00:32:21.515 [2024-11-26 07:42:05.403895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.515 [2024-11-26 07:42:05.403903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.404195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.404203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.404538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.404546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.404598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.404604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.404935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.404945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.405105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.405113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.405382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.405389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.405565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.405574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.405903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.405911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.406257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.406264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.406447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.406456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.406653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.406661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.406822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.406830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.407142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.407150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.407446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.407454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.407780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.407787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.407939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.407947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.408130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.408138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.408304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.408313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.408628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.408636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.408970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.408979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.409311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.409319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.409629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.409638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.409973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.409981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.410307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.410315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.410635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.410642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.411024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.411032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.411360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.411369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.411502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.411510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.411726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.411733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.411925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.411935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.412231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.412239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.412567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.412575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.412878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.412886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.413071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.413080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.413260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.413269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.413585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.413593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.413872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.516 [2024-11-26 07:42:05.413881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.516 qpair failed and we were unable to recover it.
00:32:21.516 [2024-11-26 07:42:05.414205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.414213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.414530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.414537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.414728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.414736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.415024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.415033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.415328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.415336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.415466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.415473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.415648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.415660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.415990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.415998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.416302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.416310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.416621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.416629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.416793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.416802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.417110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.417118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.417405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.417414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.417718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.417728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.417888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.417897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.418190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.418199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.418556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.418564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.418732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.418739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.418896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.418904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.419307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.419315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.419690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.419698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.420006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.420015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.420330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.420338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.420518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.420526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.420856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.420869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.421021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.421028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.421224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.421232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.421542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.421550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.421865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.517 [2024-11-26 07:42:05.421874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.517 qpair failed and we were unable to recover it.
00:32:21.517 [2024-11-26 07:42:05.422188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.517 [2024-11-26 07:42:05.422197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.517 qpair failed and we were unable to recover it. 00:32:21.517 [2024-11-26 07:42:05.422360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.517 [2024-11-26 07:42:05.422369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.517 qpair failed and we were unable to recover it. 00:32:21.517 [2024-11-26 07:42:05.422720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.517 [2024-11-26 07:42:05.422728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.517 qpair failed and we were unable to recover it. 00:32:21.517 [2024-11-26 07:42:05.423078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.517 [2024-11-26 07:42:05.423087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.517 qpair failed and we were unable to recover it. 00:32:21.517 [2024-11-26 07:42:05.423380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.517 [2024-11-26 07:42:05.423389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.517 qpair failed and we were unable to recover it. 
00:32:21.517 [2024-11-26 07:42:05.423552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.517 [2024-11-26 07:42:05.423561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.517 qpair failed and we were unable to recover it. 00:32:21.517 [2024-11-26 07:42:05.423914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.517 [2024-11-26 07:42:05.423923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.517 qpair failed and we were unable to recover it. 00:32:21.517 [2024-11-26 07:42:05.424259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.517 [2024-11-26 07:42:05.424267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.517 qpair failed and we were unable to recover it. 00:32:21.517 [2024-11-26 07:42:05.424582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.517 [2024-11-26 07:42:05.424590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.517 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.424781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.424790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 
00:32:21.518 [2024-11-26 07:42:05.425199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.425207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.425510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.425518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.425710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.425718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.425878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.425887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.426177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.426185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 
00:32:21.518 [2024-11-26 07:42:05.426497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.426505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 Malloc0 00:32:21.518 [2024-11-26 07:42:05.426895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.426904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.427216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.427226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.518 [2024-11-26 07:42:05.427540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.427549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 
00:32:21.518 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:21.518 [2024-11-26 07:42:05.427876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.427885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.518 [2024-11-26 07:42:05.428201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.428210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:21.518 [2024-11-26 07:42:05.428402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.428411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.428707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.428715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 
00:32:21.518 [2024-11-26 07:42:05.429038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.429046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.429303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.429312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.429612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.429621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.429957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.429966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.430298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.430307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 
00:32:21.518 [2024-11-26 07:42:05.430447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.430455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.430618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.430626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.430945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.430954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.431264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.431273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.431568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.431575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 
00:32:21.518 [2024-11-26 07:42:05.431940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.431948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.432272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.432280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.432599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.432607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.432949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.432958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.433266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.433274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 
00:32:21.518 [2024-11-26 07:42:05.433587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.433595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.433741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.433749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.434007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:21.518 [2024-11-26 07:42:05.434092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.434100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.434266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.434273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 00:32:21.518 [2024-11-26 07:42:05.434584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.518 [2024-11-26 07:42:05.434592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.518 qpair failed and we were unable to recover it. 
00:32:21.518 [2024-11-26 07:42:05.434913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.434923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.435271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.435279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.435454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.435463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.435644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.435652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.435841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.435850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 
00:32:21.519 [2024-11-26 07:42:05.436086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.436096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.436401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.436409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.436606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.436614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.436937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.436946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.437130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.437139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 
00:32:21.519 [2024-11-26 07:42:05.437336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.437344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.437659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.437667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.437827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.437837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.438043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.438051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.438202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.438211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 
00:32:21.519 [2024-11-26 07:42:05.438403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.438411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.438615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.438624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.438672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.438680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.439018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.439027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.439202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.439211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 
00:32:21.519 [2024-11-26 07:42:05.439529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.439538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.439723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.439732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.440079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.440087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.440266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.440275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.440608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.440616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 
00:32:21.519 [2024-11-26 07:42:05.440937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.440945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.441283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.441291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.441463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.441472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.441764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.441773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.442078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.442086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 
00:32:21.519 [2024-11-26 07:42:05.442278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.442287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.442480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.519 [2024-11-26 07:42:05.442488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.519 qpair failed and we were unable to recover it. 00:32:21.519 [2024-11-26 07:42:05.442833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.442841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.520 [2024-11-26 07:42:05.443144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.443153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 
00:32:21.520 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:21.520 [2024-11-26 07:42:05.443523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.443532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.443730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.520 [2024-11-26 07:42:05.443738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.443902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.443910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:21.520 [2024-11-26 07:42:05.444224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.444235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 
00:32:21.520 [2024-11-26 07:42:05.444563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.444571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.444900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.444909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.445000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.445007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.445324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.445332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.445652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.445660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 
00:32:21.520 [2024-11-26 07:42:05.445973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.445982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.446328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.446336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.446662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.446670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.446963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.446972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.447164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.447172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 
00:32:21.520 [2024-11-26 07:42:05.447500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.447510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.447858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.447872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.448164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.448173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.448213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.448223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.448526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.448533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 
00:32:21.520 [2024-11-26 07:42:05.448848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.448856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.449066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.449075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.449348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.449356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.449514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.449522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.449897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.449905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 
00:32:21.520 [2024-11-26 07:42:05.449982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.449988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.450164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.450172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.450510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.450518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.450839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.450847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.451022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.451031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 
00:32:21.520 [2024-11-26 07:42:05.451218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.451226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.451640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.451650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.451932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.451941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.452272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.520 [2024-11-26 07:42:05.452281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.520 qpair failed and we were unable to recover it. 00:32:21.520 [2024-11-26 07:42:05.452479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.452487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 
00:32:21.521 [2024-11-26 07:42:05.452812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.452822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.452989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.452998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.453298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.453307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.453611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.453619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.453927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.453935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 
00:32:21.521 [2024-11-26 07:42:05.454135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.454143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.454504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.454511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.454837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.454845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.521 [2024-11-26 07:42:05.455162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.455171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.455356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.455365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 
00:32:21.521 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:21.521 [2024-11-26 07:42:05.455725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.455733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.521 [2024-11-26 07:42:05.455944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.455954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.455989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.455997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:21.521 [2024-11-26 07:42:05.456357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.456365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 
00:32:21.521 [2024-11-26 07:42:05.456689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.456697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.457011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.457020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.457395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.457403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.457725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.457733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.458057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.458066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 
00:32:21.521 [2024-11-26 07:42:05.458266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.458274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.458572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.458580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.458776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.458786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.459119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.459128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.459301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.459310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 
00:32:21.521 [2024-11-26 07:42:05.459570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.459580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.459884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.459893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.460186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.460195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.460382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.460391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.460589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.460598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 
00:32:21.521 [2024-11-26 07:42:05.460773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.460781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.460975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.460984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.461312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.461320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.461628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.461636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 00:32:21.521 [2024-11-26 07:42:05.461797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.521 [2024-11-26 07:42:05.461804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.521 qpair failed and we were unable to recover it. 
00:32:21.521 [2024-11-26 07:42:05.461998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.462006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.462334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.462342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.462659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.462668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.462853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.462867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.463240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.463248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 
00:32:21.522 [2024-11-26 07:42:05.463410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.463419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.463740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.463750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.463929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.463939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.464292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.464301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.464622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.464631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 
00:32:21.522 [2024-11-26 07:42:05.464935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.464944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.465172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.465181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.465504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.465513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.465831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.465841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.466044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.466053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 
00:32:21.522 [2024-11-26 07:42:05.466402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.466411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.466725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.466733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.466930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.466939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.467139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.467150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.522 [2024-11-26 07:42:05.467469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.467477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 
00:32:21.522 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:21.522 [2024-11-26 07:42:05.467795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.467804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.467876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.467883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.522 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:21.522 [2024-11-26 07:42:05.468212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.468221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.468592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.468601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 
00:32:21.522 [2024-11-26 07:42:05.468766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.468775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.469092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.469101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.469282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.469291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.469470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.469478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 00:32:21.522 [2024-11-26 07:42:05.469769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.522 [2024-11-26 07:42:05.469777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420 00:32:21.522 qpair failed and we were unable to recover it. 
00:32:21.522 [2024-11-26 07:42:05.470100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.522 [2024-11-26 07:42:05.470108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.522 qpair failed and we were unable to recover it.
00:32:21.522 [2024-11-26 07:42:05.470436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.522 [2024-11-26 07:42:05.470444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.522 qpair failed and we were unable to recover it.
00:32:21.522 [2024-11-26 07:42:05.470801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.522 [2024-11-26 07:42:05.470809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.522 qpair failed and we were unable to recover it.
00:32:21.522 [2024-11-26 07:42:05.470977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.522 [2024-11-26 07:42:05.470995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.522 qpair failed and we were unable to recover it.
00:32:21.522 [2024-11-26 07:42:05.471323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.522 [2024-11-26 07:42:05.471332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.522 qpair failed and we were unable to recover it.
00:32:21.522 [2024-11-26 07:42:05.471657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.522 [2024-11-26 07:42:05.471666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.522 qpair failed and we were unable to recover it.
00:32:21.522 [2024-11-26 07:42:05.471999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.522 [2024-11-26 07:42:05.472009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.523 qpair failed and we were unable to recover it.
00:32:21.523 [2024-11-26 07:42:05.472361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.523 [2024-11-26 07:42:05.472370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.523 qpair failed and we were unable to recover it.
00:32:21.523 [2024-11-26 07:42:05.472406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.523 [2024-11-26 07:42:05.472414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.523 qpair failed and we were unable to recover it.
00:32:21.523 [2024-11-26 07:42:05.472717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.523 [2024-11-26 07:42:05.472727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.523 qpair failed and we were unable to recover it.
00:32:21.523 [2024-11-26 07:42:05.472923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.523 [2024-11-26 07:42:05.472932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.523 qpair failed and we were unable to recover it.
00:32:21.523 [2024-11-26 07:42:05.473112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.523 [2024-11-26 07:42:05.473120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.523 qpair failed and we were unable to recover it.
00:32:21.523 [2024-11-26 07:42:05.473428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.523 [2024-11-26 07:42:05.473437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.523 qpair failed and we were unable to recover it.
00:32:21.523 [2024-11-26 07:42:05.473595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.523 [2024-11-26 07:42:05.473604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.523 qpair failed and we were unable to recover it.
00:32:21.523 [2024-11-26 07:42:05.473796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.523 [2024-11-26 07:42:05.473805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.523 qpair failed and we were unable to recover it.
00:32:21.523 [2024-11-26 07:42:05.474119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.523 [2024-11-26 07:42:05.474127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f90f4000b90 with addr=10.0.0.2, port=4420
00:32:21.523 qpair failed and we were unable to recover it.
00:32:21.523 [2024-11-26 07:42:05.474310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:21.523 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:21.523 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:32:21.523 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:21.523 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:21.523 [2024-11-26 07:42:05.484984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.523 [2024-11-26 07:42:05.485082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.523 [2024-11-26 07:42:05.485097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.523 [2024-11-26 07:42:05.485103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.523 [2024-11-26 07:42:05.485108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.523 [2024-11-26 07:42:05.485124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.523 qpair failed and we were unable to recover it.
00:32:21.523 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:21.523 07:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2317065
00:32:21.523 [2024-11-26 07:42:05.494930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.523 [2024-11-26 07:42:05.494986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.523 [2024-11-26 07:42:05.494997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.523 [2024-11-26 07:42:05.495003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.523 [2024-11-26 07:42:05.495008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.523 [2024-11-26 07:42:05.495019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.523 qpair failed and we were unable to recover it.
00:32:21.523 [2024-11-26 07:42:05.504940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.523 [2024-11-26 07:42:05.504994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.523 [2024-11-26 07:42:05.505006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.523 [2024-11-26 07:42:05.505011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.523 [2024-11-26 07:42:05.505016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.523 [2024-11-26 07:42:05.505027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.523 qpair failed and we were unable to recover it.
00:32:21.523 [2024-11-26 07:42:05.514968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.523 [2024-11-26 07:42:05.515024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.523 [2024-11-26 07:42:05.515035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.523 [2024-11-26 07:42:05.515040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.523 [2024-11-26 07:42:05.515044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.523 [2024-11-26 07:42:05.515056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.523 qpair failed and we were unable to recover it.
00:32:21.523 [2024-11-26 07:42:05.524920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.523 [2024-11-26 07:42:05.525021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.523 [2024-11-26 07:42:05.525031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.523 [2024-11-26 07:42:05.525036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.523 [2024-11-26 07:42:05.525041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.523 [2024-11-26 07:42:05.525052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.523 qpair failed and we were unable to recover it.
00:32:21.523 [2024-11-26 07:42:05.534921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.523 [2024-11-26 07:42:05.535009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.523 [2024-11-26 07:42:05.535019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.523 [2024-11-26 07:42:05.535027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.523 [2024-11-26 07:42:05.535032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.523 [2024-11-26 07:42:05.535042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.523 qpair failed and we were unable to recover it.
00:32:21.523 [2024-11-26 07:42:05.544947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.523 [2024-11-26 07:42:05.544995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.523 [2024-11-26 07:42:05.545005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.523 [2024-11-26 07:42:05.545011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.523 [2024-11-26 07:42:05.545015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.523 [2024-11-26 07:42:05.545026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.523 qpair failed and we were unable to recover it.
00:32:21.523 [2024-11-26 07:42:05.554850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.523 [2024-11-26 07:42:05.554906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.523 [2024-11-26 07:42:05.554916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.523 [2024-11-26 07:42:05.554921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.523 [2024-11-26 07:42:05.554926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.523 [2024-11-26 07:42:05.554936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.523 qpair failed and we were unable to recover it.
00:32:21.523 [2024-11-26 07:42:05.565049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.523 [2024-11-26 07:42:05.565102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.524 [2024-11-26 07:42:05.565112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.524 [2024-11-26 07:42:05.565117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.524 [2024-11-26 07:42:05.565122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.524 [2024-11-26 07:42:05.565132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.524 qpair failed and we were unable to recover it.
00:32:21.524 [2024-11-26 07:42:05.575036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.524 [2024-11-26 07:42:05.575084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.524 [2024-11-26 07:42:05.575094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.524 [2024-11-26 07:42:05.575099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.524 [2024-11-26 07:42:05.575103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.524 [2024-11-26 07:42:05.575116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.524 qpair failed and we were unable to recover it.
00:32:21.524 [2024-11-26 07:42:05.585087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.524 [2024-11-26 07:42:05.585138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.524 [2024-11-26 07:42:05.585148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.524 [2024-11-26 07:42:05.585153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.524 [2024-11-26 07:42:05.585158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.524 [2024-11-26 07:42:05.585168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.524 qpair failed and we were unable to recover it.
00:32:21.524 [2024-11-26 07:42:05.594957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.524 [2024-11-26 07:42:05.595019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.524 [2024-11-26 07:42:05.595030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.524 [2024-11-26 07:42:05.595035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.524 [2024-11-26 07:42:05.595040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.524 [2024-11-26 07:42:05.595050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.524 qpair failed and we were unable to recover it.
00:32:21.524 [2024-11-26 07:42:05.605115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.524 [2024-11-26 07:42:05.605169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.524 [2024-11-26 07:42:05.605179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.524 [2024-11-26 07:42:05.605185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.524 [2024-11-26 07:42:05.605190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.524 [2024-11-26 07:42:05.605200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.524 qpair failed and we were unable to recover it.
00:32:21.524 [2024-11-26 07:42:05.615150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.524 [2024-11-26 07:42:05.615202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.524 [2024-11-26 07:42:05.615212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.524 [2024-11-26 07:42:05.615217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.524 [2024-11-26 07:42:05.615222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.524 [2024-11-26 07:42:05.615232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.524 qpair failed and we were unable to recover it.
00:32:21.787 [2024-11-26 07:42:05.625142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.787 [2024-11-26 07:42:05.625191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.787 [2024-11-26 07:42:05.625201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.787 [2024-11-26 07:42:05.625207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.787 [2024-11-26 07:42:05.625211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.787 [2024-11-26 07:42:05.625221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.787 qpair failed and we were unable to recover it.
00:32:21.787 [2024-11-26 07:42:05.635251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.787 [2024-11-26 07:42:05.635300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.787 [2024-11-26 07:42:05.635310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.787 [2024-11-26 07:42:05.635315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.787 [2024-11-26 07:42:05.635320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.787 [2024-11-26 07:42:05.635330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.787 qpair failed and we were unable to recover it.
00:32:21.787 [2024-11-26 07:42:05.645119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.787 [2024-11-26 07:42:05.645171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.787 [2024-11-26 07:42:05.645181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.787 [2024-11-26 07:42:05.645186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.787 [2024-11-26 07:42:05.645191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.787 [2024-11-26 07:42:05.645201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.787 qpair failed and we were unable to recover it.
00:32:21.787 [2024-11-26 07:42:05.655215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.787 [2024-11-26 07:42:05.655270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.787 [2024-11-26 07:42:05.655280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.787 [2024-11-26 07:42:05.655286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.787 [2024-11-26 07:42:05.655290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.787 [2024-11-26 07:42:05.655301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.787 qpair failed and we were unable to recover it.
00:32:21.787 [2024-11-26 07:42:05.665265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.787 [2024-11-26 07:42:05.665314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.787 [2024-11-26 07:42:05.665326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.787 [2024-11-26 07:42:05.665332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.787 [2024-11-26 07:42:05.665336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.787 [2024-11-26 07:42:05.665347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.787 qpair failed and we were unable to recover it.
00:32:21.787 [2024-11-26 07:42:05.675235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.787 [2024-11-26 07:42:05.675283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.787 [2024-11-26 07:42:05.675295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.787 [2024-11-26 07:42:05.675300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.787 [2024-11-26 07:42:05.675305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.787 [2024-11-26 07:42:05.675316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.787 qpair failed and we were unable to recover it.
00:32:21.787 [2024-11-26 07:42:05.685325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.787 [2024-11-26 07:42:05.685377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.787 [2024-11-26 07:42:05.685387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.787 [2024-11-26 07:42:05.685392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.787 [2024-11-26 07:42:05.685397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.788 [2024-11-26 07:42:05.685407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.788 qpair failed and we were unable to recover it.
00:32:21.788 [2024-11-26 07:42:05.695230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.788 [2024-11-26 07:42:05.695276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.788 [2024-11-26 07:42:05.695287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.788 [2024-11-26 07:42:05.695292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.788 [2024-11-26 07:42:05.695297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.788 [2024-11-26 07:42:05.695308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.788 qpair failed and we were unable to recover it.
00:32:21.788 [2024-11-26 07:42:05.705392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.788 [2024-11-26 07:42:05.705487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.788 [2024-11-26 07:42:05.705498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.788 [2024-11-26 07:42:05.705503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.788 [2024-11-26 07:42:05.705510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.788 [2024-11-26 07:42:05.705521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.788 qpair failed and we were unable to recover it.
00:32:21.788 [2024-11-26 07:42:05.715666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.788 [2024-11-26 07:42:05.715721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.788 [2024-11-26 07:42:05.715731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.788 [2024-11-26 07:42:05.715736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.788 [2024-11-26 07:42:05.715741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.788 [2024-11-26 07:42:05.715751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.788 qpair failed and we were unable to recover it.
00:32:21.788 [2024-11-26 07:42:05.725514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:21.788 [2024-11-26 07:42:05.725565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:21.788 [2024-11-26 07:42:05.725575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:21.788 [2024-11-26 07:42:05.725580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:21.788 [2024-11-26 07:42:05.725585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:21.788 [2024-11-26 07:42:05.725595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:21.788 qpair failed and we were unable to recover it.
00:32:21.788 [2024-11-26 07:42:05.735535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:21.788 [2024-11-26 07:42:05.735587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:21.788 [2024-11-26 07:42:05.735597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:21.788 [2024-11-26 07:42:05.735602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:21.788 [2024-11-26 07:42:05.735607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:21.788 [2024-11-26 07:42:05.735617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:21.788 qpair failed and we were unable to recover it. 
00:32:21.788 [2024-11-26 07:42:05.745518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:21.788 [2024-11-26 07:42:05.745561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:21.788 [2024-11-26 07:42:05.745571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:21.788 [2024-11-26 07:42:05.745576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:21.788 [2024-11-26 07:42:05.745581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:21.788 [2024-11-26 07:42:05.745591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:21.788 qpair failed and we were unable to recover it. 
00:32:21.788 [2024-11-26 07:42:05.755552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:21.788 [2024-11-26 07:42:05.755605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:21.788 [2024-11-26 07:42:05.755615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:21.788 [2024-11-26 07:42:05.755620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:21.788 [2024-11-26 07:42:05.755625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:21.788 [2024-11-26 07:42:05.755635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:21.788 qpair failed and we were unable to recover it. 
00:32:21.788 [2024-11-26 07:42:05.765578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:21.788 [2024-11-26 07:42:05.765626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:21.788 [2024-11-26 07:42:05.765636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:21.788 [2024-11-26 07:42:05.765641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:21.788 [2024-11-26 07:42:05.765646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:21.788 [2024-11-26 07:42:05.765656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:21.788 qpair failed and we were unable to recover it. 
00:32:21.788 [2024-11-26 07:42:05.775493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:21.788 [2024-11-26 07:42:05.775543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:21.788 [2024-11-26 07:42:05.775553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:21.788 [2024-11-26 07:42:05.775558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:21.788 [2024-11-26 07:42:05.775563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:21.788 [2024-11-26 07:42:05.775573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:21.788 qpair failed and we were unable to recover it. 
00:32:21.788 [2024-11-26 07:42:05.785620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:21.788 [2024-11-26 07:42:05.785665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:21.788 [2024-11-26 07:42:05.785675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:21.788 [2024-11-26 07:42:05.785680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:21.788 [2024-11-26 07:42:05.785685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:21.789 [2024-11-26 07:42:05.785695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:21.789 qpair failed and we were unable to recover it. 
00:32:21.789 [2024-11-26 07:42:05.795703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:21.789 [2024-11-26 07:42:05.795802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:21.789 [2024-11-26 07:42:05.795815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:21.789 [2024-11-26 07:42:05.795820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:21.789 [2024-11-26 07:42:05.795825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:21.789 [2024-11-26 07:42:05.795835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:21.789 qpair failed and we were unable to recover it. 
00:32:21.789 [2024-11-26 07:42:05.805566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:21.789 [2024-11-26 07:42:05.805614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:21.789 [2024-11-26 07:42:05.805625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:21.789 [2024-11-26 07:42:05.805630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:21.789 [2024-11-26 07:42:05.805635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:21.789 [2024-11-26 07:42:05.805645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:21.789 qpair failed and we were unable to recover it. 
00:32:21.789 [2024-11-26 07:42:05.815696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:21.789 [2024-11-26 07:42:05.815744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:21.789 [2024-11-26 07:42:05.815754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:21.789 [2024-11-26 07:42:05.815760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:21.789 [2024-11-26 07:42:05.815765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:21.789 [2024-11-26 07:42:05.815775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:21.789 qpair failed and we were unable to recover it. 
00:32:21.789 [2024-11-26 07:42:05.825682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:21.789 [2024-11-26 07:42:05.825732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:21.789 [2024-11-26 07:42:05.825742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:21.789 [2024-11-26 07:42:05.825747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:21.789 [2024-11-26 07:42:05.825752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:21.789 [2024-11-26 07:42:05.825762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:21.789 qpair failed and we were unable to recover it. 
00:32:21.789 [2024-11-26 07:42:05.835807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:21.789 [2024-11-26 07:42:05.835859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:21.789 [2024-11-26 07:42:05.835873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:21.789 [2024-11-26 07:42:05.835879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:21.789 [2024-11-26 07:42:05.835887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:21.789 [2024-11-26 07:42:05.835897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:21.789 qpair failed and we were unable to recover it. 
00:32:21.789 [2024-11-26 07:42:05.845705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:21.789 [2024-11-26 07:42:05.845757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:21.789 [2024-11-26 07:42:05.845767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:21.789 [2024-11-26 07:42:05.845773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:21.789 [2024-11-26 07:42:05.845778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:21.789 [2024-11-26 07:42:05.845788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:21.789 qpair failed and we were unable to recover it. 
00:32:21.789 [2024-11-26 07:42:05.855704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:21.789 [2024-11-26 07:42:05.855753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:21.789 [2024-11-26 07:42:05.855764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:21.789 [2024-11-26 07:42:05.855769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:21.789 [2024-11-26 07:42:05.855774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:21.789 [2024-11-26 07:42:05.855785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:21.789 qpair failed and we were unable to recover it. 
00:32:21.789 [2024-11-26 07:42:05.865835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:21.789 [2024-11-26 07:42:05.865882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:21.789 [2024-11-26 07:42:05.865893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:21.789 [2024-11-26 07:42:05.865899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:21.789 [2024-11-26 07:42:05.865904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:21.789 [2024-11-26 07:42:05.865914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:21.789 qpair failed and we were unable to recover it. 
00:32:21.789 [2024-11-26 07:42:05.875858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:21.789 [2024-11-26 07:42:05.875913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:21.789 [2024-11-26 07:42:05.875924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:21.789 [2024-11-26 07:42:05.875929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:21.789 [2024-11-26 07:42:05.875934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:21.789 [2024-11-26 07:42:05.875945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:21.789 qpair failed and we were unable to recover it. 
00:32:21.789 [2024-11-26 07:42:05.885902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:21.790 [2024-11-26 07:42:05.885954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:21.790 [2024-11-26 07:42:05.885964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:21.790 [2024-11-26 07:42:05.885969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:21.790 [2024-11-26 07:42:05.885974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:21.790 [2024-11-26 07:42:05.885985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:21.790 qpair failed and we were unable to recover it. 
00:32:21.790 [2024-11-26 07:42:05.895912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:21.790 [2024-11-26 07:42:05.895994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:21.790 [2024-11-26 07:42:05.896003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:21.790 [2024-11-26 07:42:05.896009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:21.790 [2024-11-26 07:42:05.896014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:21.790 [2024-11-26 07:42:05.896024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:21.790 qpair failed and we were unable to recover it. 
00:32:21.790 [2024-11-26 07:42:05.905929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:21.790 [2024-11-26 07:42:05.905980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:21.790 [2024-11-26 07:42:05.905990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:21.790 [2024-11-26 07:42:05.905995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:21.790 [2024-11-26 07:42:05.906000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:21.790 [2024-11-26 07:42:05.906010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:21.790 qpair failed and we were unable to recover it. 
00:32:22.053 [2024-11-26 07:42:05.915973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.053 [2024-11-26 07:42:05.916021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.053 [2024-11-26 07:42:05.916031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.053 [2024-11-26 07:42:05.916037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.053 [2024-11-26 07:42:05.916041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.053 [2024-11-26 07:42:05.916052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.053 qpair failed and we were unable to recover it. 
00:32:22.053 [2024-11-26 07:42:05.925900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.053 [2024-11-26 07:42:05.925983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.053 [2024-11-26 07:42:05.925996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.053 [2024-11-26 07:42:05.926001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.053 [2024-11-26 07:42:05.926006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.053 [2024-11-26 07:42:05.926016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.053 qpair failed and we were unable to recover it. 
00:32:22.053 [2024-11-26 07:42:05.936017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.053 [2024-11-26 07:42:05.936063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.053 [2024-11-26 07:42:05.936073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.053 [2024-11-26 07:42:05.936078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.053 [2024-11-26 07:42:05.936083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.053 [2024-11-26 07:42:05.936093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.053 qpair failed and we were unable to recover it. 
00:32:22.053 [2024-11-26 07:42:05.946077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.053 [2024-11-26 07:42:05.946143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.053 [2024-11-26 07:42:05.946153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.053 [2024-11-26 07:42:05.946158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.053 [2024-11-26 07:42:05.946163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.053 [2024-11-26 07:42:05.946173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.053 qpair failed and we were unable to recover it. 
00:32:22.053 [2024-11-26 07:42:05.956113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.053 [2024-11-26 07:42:05.956159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.053 [2024-11-26 07:42:05.956169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.053 [2024-11-26 07:42:05.956174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.053 [2024-11-26 07:42:05.956179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.053 [2024-11-26 07:42:05.956189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.053 qpair failed and we were unable to recover it. 
00:32:22.053 [2024-11-26 07:42:05.966119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.053 [2024-11-26 07:42:05.966194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.053 [2024-11-26 07:42:05.966204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.053 [2024-11-26 07:42:05.966212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.053 [2024-11-26 07:42:05.966217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.053 [2024-11-26 07:42:05.966227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.053 qpair failed and we were unable to recover it. 
00:32:22.053 [2024-11-26 07:42:05.976159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.054 [2024-11-26 07:42:05.976213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.054 [2024-11-26 07:42:05.976223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.054 [2024-11-26 07:42:05.976228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.054 [2024-11-26 07:42:05.976233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.054 [2024-11-26 07:42:05.976243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.054 qpair failed and we were unable to recover it. 
00:32:22.054 [2024-11-26 07:42:05.986098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.054 [2024-11-26 07:42:05.986142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.054 [2024-11-26 07:42:05.986152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.054 [2024-11-26 07:42:05.986157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.054 [2024-11-26 07:42:05.986162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.054 [2024-11-26 07:42:05.986172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.054 qpair failed and we were unable to recover it. 
00:32:22.054 [2024-11-26 07:42:05.996181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.054 [2024-11-26 07:42:05.996232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.054 [2024-11-26 07:42:05.996242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.054 [2024-11-26 07:42:05.996247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.054 [2024-11-26 07:42:05.996252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.054 [2024-11-26 07:42:05.996262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.054 qpair failed and we were unable to recover it. 
00:32:22.054 [2024-11-26 07:42:06.006238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.054 [2024-11-26 07:42:06.006290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.054 [2024-11-26 07:42:06.006299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.054 [2024-11-26 07:42:06.006305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.054 [2024-11-26 07:42:06.006309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.054 [2024-11-26 07:42:06.006322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.054 qpair failed and we were unable to recover it. 
00:32:22.054 [2024-11-26 07:42:06.016268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.054 [2024-11-26 07:42:06.016318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.054 [2024-11-26 07:42:06.016327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.054 [2024-11-26 07:42:06.016333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.054 [2024-11-26 07:42:06.016338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.054 [2024-11-26 07:42:06.016348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.054 qpair failed and we were unable to recover it. 
00:32:22.054 [2024-11-26 07:42:06.026283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.054 [2024-11-26 07:42:06.026336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.054 [2024-11-26 07:42:06.026346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.054 [2024-11-26 07:42:06.026351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.054 [2024-11-26 07:42:06.026356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.054 [2024-11-26 07:42:06.026366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.054 qpair failed and we were unable to recover it.
00:32:22.054 [2024-11-26 07:42:06.036306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.054 [2024-11-26 07:42:06.036357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.054 [2024-11-26 07:42:06.036367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.054 [2024-11-26 07:42:06.036372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.054 [2024-11-26 07:42:06.036377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.054 [2024-11-26 07:42:06.036388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.054 qpair failed and we were unable to recover it.
00:32:22.054 [2024-11-26 07:42:06.046323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.054 [2024-11-26 07:42:06.046386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.054 [2024-11-26 07:42:06.046396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.054 [2024-11-26 07:42:06.046401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.054 [2024-11-26 07:42:06.046406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.054 [2024-11-26 07:42:06.046416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.054 qpair failed and we were unable to recover it.
00:32:22.054 [2024-11-26 07:42:06.056359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.054 [2024-11-26 07:42:06.056414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.054 [2024-11-26 07:42:06.056424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.054 [2024-11-26 07:42:06.056429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.054 [2024-11-26 07:42:06.056434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.054 [2024-11-26 07:42:06.056444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.054 qpair failed and we were unable to recover it.
00:32:22.054 [2024-11-26 07:42:06.066458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.054 [2024-11-26 07:42:06.066516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.054 [2024-11-26 07:42:06.066526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.054 [2024-11-26 07:42:06.066531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.054 [2024-11-26 07:42:06.066536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.054 [2024-11-26 07:42:06.066546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.054 qpair failed and we were unable to recover it.
00:32:22.054 [2024-11-26 07:42:06.076469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.055 [2024-11-26 07:42:06.076531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.055 [2024-11-26 07:42:06.076550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.055 [2024-11-26 07:42:06.076556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.055 [2024-11-26 07:42:06.076561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.055 [2024-11-26 07:42:06.076575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.055 qpair failed and we were unable to recover it.
00:32:22.055 [2024-11-26 07:42:06.086461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.055 [2024-11-26 07:42:06.086549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.055 [2024-11-26 07:42:06.086559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.055 [2024-11-26 07:42:06.086564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.055 [2024-11-26 07:42:06.086569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.055 [2024-11-26 07:42:06.086580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.055 qpair failed and we were unable to recover it.
00:32:22.055 [2024-11-26 07:42:06.096460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.055 [2024-11-26 07:42:06.096510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.055 [2024-11-26 07:42:06.096520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.055 [2024-11-26 07:42:06.096528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.055 [2024-11-26 07:42:06.096533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.055 [2024-11-26 07:42:06.096544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.055 qpair failed and we were unable to recover it.
00:32:22.055 [2024-11-26 07:42:06.106408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.055 [2024-11-26 07:42:06.106499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.055 [2024-11-26 07:42:06.106509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.055 [2024-11-26 07:42:06.106515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.055 [2024-11-26 07:42:06.106520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.055 [2024-11-26 07:42:06.106530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.055 qpair failed and we were unable to recover it.
00:32:22.055 [2024-11-26 07:42:06.116402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.055 [2024-11-26 07:42:06.116461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.055 [2024-11-26 07:42:06.116471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.055 [2024-11-26 07:42:06.116476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.055 [2024-11-26 07:42:06.116481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.055 [2024-11-26 07:42:06.116491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.055 qpair failed and we were unable to recover it.
00:32:22.055 [2024-11-26 07:42:06.126567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.055 [2024-11-26 07:42:06.126620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.055 [2024-11-26 07:42:06.126630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.055 [2024-11-26 07:42:06.126635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.055 [2024-11-26 07:42:06.126640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.055 [2024-11-26 07:42:06.126650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.055 qpair failed and we were unable to recover it.
00:32:22.055 [2024-11-26 07:42:06.136640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.055 [2024-11-26 07:42:06.136702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.055 [2024-11-26 07:42:06.136712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.055 [2024-11-26 07:42:06.136717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.055 [2024-11-26 07:42:06.136722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.055 [2024-11-26 07:42:06.136734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.055 qpair failed and we were unable to recover it.
00:32:22.055 [2024-11-26 07:42:06.146613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.055 [2024-11-26 07:42:06.146660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.055 [2024-11-26 07:42:06.146670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.055 [2024-11-26 07:42:06.146675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.055 [2024-11-26 07:42:06.146680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.055 [2024-11-26 07:42:06.146690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.055 qpair failed and we were unable to recover it.
00:32:22.055 [2024-11-26 07:42:06.156551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.055 [2024-11-26 07:42:06.156611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.055 [2024-11-26 07:42:06.156621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.055 [2024-11-26 07:42:06.156626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.055 [2024-11-26 07:42:06.156631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.055 [2024-11-26 07:42:06.156641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.055 qpair failed and we were unable to recover it.
00:32:22.055 [2024-11-26 07:42:06.166694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.055 [2024-11-26 07:42:06.166754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.055 [2024-11-26 07:42:06.166764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.055 [2024-11-26 07:42:06.166769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.055 [2024-11-26 07:42:06.166774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.055 [2024-11-26 07:42:06.166784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.055 qpair failed and we were unable to recover it.
00:32:22.055 [2024-11-26 07:42:06.176705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.055 [2024-11-26 07:42:06.176760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.055 [2024-11-26 07:42:06.176771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.055 [2024-11-26 07:42:06.176776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.056 [2024-11-26 07:42:06.176781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.056 [2024-11-26 07:42:06.176791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.056 qpair failed and we were unable to recover it.
00:32:22.318 [2024-11-26 07:42:06.186614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.318 [2024-11-26 07:42:06.186665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.318 [2024-11-26 07:42:06.186676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.318 [2024-11-26 07:42:06.186682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.318 [2024-11-26 07:42:06.186687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.318 [2024-11-26 07:42:06.186697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.318 qpair failed and we were unable to recover it.
00:32:22.318 [2024-11-26 07:42:06.196637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.318 [2024-11-26 07:42:06.196687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.318 [2024-11-26 07:42:06.196697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.318 [2024-11-26 07:42:06.196702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.318 [2024-11-26 07:42:06.196707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.318 [2024-11-26 07:42:06.196717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.318 qpair failed and we were unable to recover it.
00:32:22.318 [2024-11-26 07:42:06.206755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.318 [2024-11-26 07:42:06.206804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.318 [2024-11-26 07:42:06.206814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.319 [2024-11-26 07:42:06.206819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.319 [2024-11-26 07:42:06.206824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.319 [2024-11-26 07:42:06.206834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.319 qpair failed and we were unable to recover it.
00:32:22.319 [2024-11-26 07:42:06.216840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.319 [2024-11-26 07:42:06.216915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.319 [2024-11-26 07:42:06.216925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.319 [2024-11-26 07:42:06.216930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.319 [2024-11-26 07:42:06.216935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.319 [2024-11-26 07:42:06.216945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.319 qpair failed and we were unable to recover it.
00:32:22.319 [2024-11-26 07:42:06.226826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.319 [2024-11-26 07:42:06.226882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.319 [2024-11-26 07:42:06.226895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.319 [2024-11-26 07:42:06.226900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.319 [2024-11-26 07:42:06.226905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.319 [2024-11-26 07:42:06.226916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.319 qpair failed and we were unable to recover it.
00:32:22.319 [2024-11-26 07:42:06.236893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.319 [2024-11-26 07:42:06.236944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.319 [2024-11-26 07:42:06.236954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.319 [2024-11-26 07:42:06.236960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.319 [2024-11-26 07:42:06.236964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.319 [2024-11-26 07:42:06.236974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.319 qpair failed and we were unable to recover it.
00:32:22.319 [2024-11-26 07:42:06.246892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.319 [2024-11-26 07:42:06.246945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.319 [2024-11-26 07:42:06.246955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.319 [2024-11-26 07:42:06.246960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.319 [2024-11-26 07:42:06.246965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.319 [2024-11-26 07:42:06.246976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.319 qpair failed and we were unable to recover it.
00:32:22.319 [2024-11-26 07:42:06.256903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.319 [2024-11-26 07:42:06.256953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.319 [2024-11-26 07:42:06.256963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.319 [2024-11-26 07:42:06.256969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.319 [2024-11-26 07:42:06.256973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.319 [2024-11-26 07:42:06.256983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.319 qpair failed and we were unable to recover it.
00:32:22.319 [2024-11-26 07:42:06.267003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.319 [2024-11-26 07:42:06.267058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.319 [2024-11-26 07:42:06.267068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.319 [2024-11-26 07:42:06.267073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.319 [2024-11-26 07:42:06.267080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.319 [2024-11-26 07:42:06.267090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.319 qpair failed and we were unable to recover it.
00:32:22.319 [2024-11-26 07:42:06.276966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.319 [2024-11-26 07:42:06.277028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.319 [2024-11-26 07:42:06.277038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.319 [2024-11-26 07:42:06.277043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.319 [2024-11-26 07:42:06.277048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.319 [2024-11-26 07:42:06.277058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.319 qpair failed and we were unable to recover it.
00:32:22.319 [2024-11-26 07:42:06.287005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.319 [2024-11-26 07:42:06.287056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.319 [2024-11-26 07:42:06.287066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.319 [2024-11-26 07:42:06.287071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.319 [2024-11-26 07:42:06.287076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.319 [2024-11-26 07:42:06.287086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.319 qpair failed and we were unable to recover it.
00:32:22.319 [2024-11-26 07:42:06.297007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.319 [2024-11-26 07:42:06.297055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.319 [2024-11-26 07:42:06.297065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.319 [2024-11-26 07:42:06.297070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.319 [2024-11-26 07:42:06.297075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.319 [2024-11-26 07:42:06.297086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.319 qpair failed and we were unable to recover it.
00:32:22.319 [2024-11-26 07:42:06.307081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.319 [2024-11-26 07:42:06.307141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.319 [2024-11-26 07:42:06.307151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.319 [2024-11-26 07:42:06.307156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.320 [2024-11-26 07:42:06.307161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.320 [2024-11-26 07:42:06.307171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.320 qpair failed and we were unable to recover it.
00:32:22.320 [2024-11-26 07:42:06.316980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.320 [2024-11-26 07:42:06.317072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.320 [2024-11-26 07:42:06.317082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.320 [2024-11-26 07:42:06.317087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.320 [2024-11-26 07:42:06.317092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.320 [2024-11-26 07:42:06.317102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.320 qpair failed and we were unable to recover it.
00:32:22.320 [2024-11-26 07:42:06.327132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.320 [2024-11-26 07:42:06.327190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.320 [2024-11-26 07:42:06.327200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.320 [2024-11-26 07:42:06.327205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.320 [2024-11-26 07:42:06.327210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.320 [2024-11-26 07:42:06.327220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.320 qpair failed and we were unable to recover it.
00:32:22.320 [2024-11-26 07:42:06.337023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.320 [2024-11-26 07:42:06.337070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.320 [2024-11-26 07:42:06.337080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.320 [2024-11-26 07:42:06.337086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.320 [2024-11-26 07:42:06.337090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.320 [2024-11-26 07:42:06.337101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.320 qpair failed and we were unable to recover it.
00:32:22.320 [2024-11-26 07:42:06.347063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.320 [2024-11-26 07:42:06.347114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.320 [2024-11-26 07:42:06.347124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.320 [2024-11-26 07:42:06.347129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.320 [2024-11-26 07:42:06.347134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.320 [2024-11-26 07:42:06.347145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.320 qpair failed and we were unable to recover it.
00:32:22.320 [2024-11-26 07:42:06.357223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.320 [2024-11-26 07:42:06.357273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.320 [2024-11-26 07:42:06.357285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.320 [2024-11-26 07:42:06.357290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.320 [2024-11-26 07:42:06.357295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.320 [2024-11-26 07:42:06.357305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.320 qpair failed and we were unable to recover it.
00:32:22.320 [2024-11-26 07:42:06.367243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.320 [2024-11-26 07:42:06.367293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.320 [2024-11-26 07:42:06.367303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.320 [2024-11-26 07:42:06.367309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.320 [2024-11-26 07:42:06.367313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.320 [2024-11-26 07:42:06.367324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.320 qpair failed and we were unable to recover it.
00:32:22.320 [2024-11-26 07:42:06.377274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.320 [2024-11-26 07:42:06.377321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.320 [2024-11-26 07:42:06.377331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.320 [2024-11-26 07:42:06.377336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.320 [2024-11-26 07:42:06.377341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.320 [2024-11-26 07:42:06.377351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.320 qpair failed and we were unable to recover it. 
00:32:22.320 [2024-11-26 07:42:06.387277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.320 [2024-11-26 07:42:06.387368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.320 [2024-11-26 07:42:06.387378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.320 [2024-11-26 07:42:06.387383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.320 [2024-11-26 07:42:06.387388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.320 [2024-11-26 07:42:06.387398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.320 qpair failed and we were unable to recover it. 
00:32:22.320 [2024-11-26 07:42:06.397337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.320 [2024-11-26 07:42:06.397428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.320 [2024-11-26 07:42:06.397438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.320 [2024-11-26 07:42:06.397443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.320 [2024-11-26 07:42:06.397450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.320 [2024-11-26 07:42:06.397461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.320 qpair failed and we were unable to recover it. 
00:32:22.320 [2024-11-26 07:42:06.407235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.320 [2024-11-26 07:42:06.407288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.320 [2024-11-26 07:42:06.407298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.320 [2024-11-26 07:42:06.407304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.320 [2024-11-26 07:42:06.407308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.321 [2024-11-26 07:42:06.407319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.321 qpair failed and we were unable to recover it. 
00:32:22.321 [2024-11-26 07:42:06.417444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.321 [2024-11-26 07:42:06.417533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.321 [2024-11-26 07:42:06.417543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.321 [2024-11-26 07:42:06.417548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.321 [2024-11-26 07:42:06.417553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.321 [2024-11-26 07:42:06.417562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.321 qpair failed and we were unable to recover it. 
00:32:22.321 [2024-11-26 07:42:06.427328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.321 [2024-11-26 07:42:06.427376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.321 [2024-11-26 07:42:06.427386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.321 [2024-11-26 07:42:06.427391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.321 [2024-11-26 07:42:06.427396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.321 [2024-11-26 07:42:06.427406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.321 qpair failed and we were unable to recover it. 
00:32:22.321 [2024-11-26 07:42:06.437444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.321 [2024-11-26 07:42:06.437496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.321 [2024-11-26 07:42:06.437506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.321 [2024-11-26 07:42:06.437511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.321 [2024-11-26 07:42:06.437515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.321 [2024-11-26 07:42:06.437525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.321 qpair failed and we were unable to recover it. 
00:32:22.585 [2024-11-26 07:42:06.447526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.585 [2024-11-26 07:42:06.447609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.585 [2024-11-26 07:42:06.447619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.585 [2024-11-26 07:42:06.447625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.585 [2024-11-26 07:42:06.447631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.585 [2024-11-26 07:42:06.447642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.585 qpair failed and we were unable to recover it. 
00:32:22.585 [2024-11-26 07:42:06.457469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.585 [2024-11-26 07:42:06.457516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.585 [2024-11-26 07:42:06.457525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.585 [2024-11-26 07:42:06.457530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.585 [2024-11-26 07:42:06.457535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.585 [2024-11-26 07:42:06.457546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.585 qpair failed and we were unable to recover it. 
00:32:22.585 [2024-11-26 07:42:06.467512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.585 [2024-11-26 07:42:06.467557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.585 [2024-11-26 07:42:06.467567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.585 [2024-11-26 07:42:06.467572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.585 [2024-11-26 07:42:06.467577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.585 [2024-11-26 07:42:06.467587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.585 qpair failed and we were unable to recover it. 
00:32:22.585 [2024-11-26 07:42:06.477563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.585 [2024-11-26 07:42:06.477629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.585 [2024-11-26 07:42:06.477639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.585 [2024-11-26 07:42:06.477644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.585 [2024-11-26 07:42:06.477649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.585 [2024-11-26 07:42:06.477659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.585 qpair failed and we were unable to recover it. 
00:32:22.585 [2024-11-26 07:42:06.487581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.585 [2024-11-26 07:42:06.487636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.585 [2024-11-26 07:42:06.487658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.585 [2024-11-26 07:42:06.487665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.585 [2024-11-26 07:42:06.487670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.585 [2024-11-26 07:42:06.487684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.585 qpair failed and we were unable to recover it. 
00:32:22.585 [2024-11-26 07:42:06.497586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.585 [2024-11-26 07:42:06.497663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.585 [2024-11-26 07:42:06.497674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.585 [2024-11-26 07:42:06.497680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.585 [2024-11-26 07:42:06.497684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.585 [2024-11-26 07:42:06.497696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.585 qpair failed and we were unable to recover it. 
00:32:22.585 [2024-11-26 07:42:06.507636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.585 [2024-11-26 07:42:06.507709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.585 [2024-11-26 07:42:06.507720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.585 [2024-11-26 07:42:06.507725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.585 [2024-11-26 07:42:06.507730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.585 [2024-11-26 07:42:06.507740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.585 qpair failed and we were unable to recover it. 
00:32:22.585 [2024-11-26 07:42:06.517636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.585 [2024-11-26 07:42:06.517684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.585 [2024-11-26 07:42:06.517694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.585 [2024-11-26 07:42:06.517699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.585 [2024-11-26 07:42:06.517704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.585 [2024-11-26 07:42:06.517715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.585 qpair failed and we were unable to recover it. 
00:32:22.585 [2024-11-26 07:42:06.527650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.585 [2024-11-26 07:42:06.527704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.585 [2024-11-26 07:42:06.527713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.585 [2024-11-26 07:42:06.527721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.585 [2024-11-26 07:42:06.527726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.585 [2024-11-26 07:42:06.527737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.585 qpair failed and we were unable to recover it. 
00:32:22.585 [2024-11-26 07:42:06.537739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.586 [2024-11-26 07:42:06.537787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.586 [2024-11-26 07:42:06.537797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.586 [2024-11-26 07:42:06.537802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.586 [2024-11-26 07:42:06.537806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.586 [2024-11-26 07:42:06.537817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.586 qpair failed and we were unable to recover it. 
00:32:22.586 [2024-11-26 07:42:06.547715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.586 [2024-11-26 07:42:06.547759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.586 [2024-11-26 07:42:06.547769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.586 [2024-11-26 07:42:06.547774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.586 [2024-11-26 07:42:06.547779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.586 [2024-11-26 07:42:06.547789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.586 qpair failed and we were unable to recover it. 
00:32:22.586 [2024-11-26 07:42:06.557755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.586 [2024-11-26 07:42:06.557804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.586 [2024-11-26 07:42:06.557814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.586 [2024-11-26 07:42:06.557820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.586 [2024-11-26 07:42:06.557824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.586 [2024-11-26 07:42:06.557834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.586 qpair failed and we were unable to recover it. 
00:32:22.586 [2024-11-26 07:42:06.567654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.586 [2024-11-26 07:42:06.567705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.586 [2024-11-26 07:42:06.567715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.586 [2024-11-26 07:42:06.567721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.586 [2024-11-26 07:42:06.567725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.586 [2024-11-26 07:42:06.567739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.586 qpair failed and we were unable to recover it. 
00:32:22.586 [2024-11-26 07:42:06.577792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.586 [2024-11-26 07:42:06.577844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.586 [2024-11-26 07:42:06.577854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.586 [2024-11-26 07:42:06.577859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.586 [2024-11-26 07:42:06.577867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.586 [2024-11-26 07:42:06.577877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.586 qpair failed and we were unable to recover it. 
00:32:22.586 [2024-11-26 07:42:06.587848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.586 [2024-11-26 07:42:06.587926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.586 [2024-11-26 07:42:06.587936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.586 [2024-11-26 07:42:06.587941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.586 [2024-11-26 07:42:06.587946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.586 [2024-11-26 07:42:06.587957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.586 qpair failed and we were unable to recover it. 
00:32:22.586 [2024-11-26 07:42:06.597920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.586 [2024-11-26 07:42:06.597973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.586 [2024-11-26 07:42:06.597983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.586 [2024-11-26 07:42:06.597988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.586 [2024-11-26 07:42:06.597993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.586 [2024-11-26 07:42:06.598003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.586 qpair failed and we were unable to recover it. 
00:32:22.586 [2024-11-26 07:42:06.607904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.586 [2024-11-26 07:42:06.607952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.586 [2024-11-26 07:42:06.607962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.586 [2024-11-26 07:42:06.607967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.586 [2024-11-26 07:42:06.607971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.586 [2024-11-26 07:42:06.607982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.586 qpair failed and we were unable to recover it. 
00:32:22.586 [2024-11-26 07:42:06.617884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.586 [2024-11-26 07:42:06.617969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.586 [2024-11-26 07:42:06.617979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.586 [2024-11-26 07:42:06.617984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.586 [2024-11-26 07:42:06.617989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.586 [2024-11-26 07:42:06.617999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.586 qpair failed and we were unable to recover it. 
00:32:22.586 [2024-11-26 07:42:06.627967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.586 [2024-11-26 07:42:06.628021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.586 [2024-11-26 07:42:06.628031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.586 [2024-11-26 07:42:06.628036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.586 [2024-11-26 07:42:06.628043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.586 [2024-11-26 07:42:06.628053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.586 qpair failed and we were unable to recover it. 
00:32:22.586 [2024-11-26 07:42:06.637965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.586 [2024-11-26 07:42:06.638015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.586 [2024-11-26 07:42:06.638025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.586 [2024-11-26 07:42:06.638030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.586 [2024-11-26 07:42:06.638035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:22.586 [2024-11-26 07:42:06.638045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:22.586 qpair failed and we were unable to recover it. 
00:32:22.586 [2024-11-26 07:42:06.648013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.586 [2024-11-26 07:42:06.648060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.586 [2024-11-26 07:42:06.648070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.586 [2024-11-26 07:42:06.648076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.586 [2024-11-26 07:42:06.648080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.586 [2024-11-26 07:42:06.648090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.586 qpair failed and we were unable to recover it.
00:32:22.586 [2024-11-26 07:42:06.658046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.586 [2024-11-26 07:42:06.658098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.586 [2024-11-26 07:42:06.658107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.586 [2024-11-26 07:42:06.658115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.586 [2024-11-26 07:42:06.658120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.586 [2024-11-26 07:42:06.658131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.586 qpair failed and we were unable to recover it.
00:32:22.586 [2024-11-26 07:42:06.667945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.586 [2024-11-26 07:42:06.667993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.586 [2024-11-26 07:42:06.668003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.586 [2024-11-26 07:42:06.668008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.586 [2024-11-26 07:42:06.668013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.586 [2024-11-26 07:42:06.668023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.586 qpair failed and we were unable to recover it.
00:32:22.586 [2024-11-26 07:42:06.678052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.586 [2024-11-26 07:42:06.678101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.586 [2024-11-26 07:42:06.678111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.586 [2024-11-26 07:42:06.678116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.586 [2024-11-26 07:42:06.678121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.586 [2024-11-26 07:42:06.678131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.586 qpair failed and we were unable to recover it.
00:32:22.586 [2024-11-26 07:42:06.688154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.587 [2024-11-26 07:42:06.688203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.587 [2024-11-26 07:42:06.688213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.587 [2024-11-26 07:42:06.688218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.587 [2024-11-26 07:42:06.688223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.587 [2024-11-26 07:42:06.688233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.587 qpair failed and we were unable to recover it.
00:32:22.587 [2024-11-26 07:42:06.698147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.587 [2024-11-26 07:42:06.698207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.587 [2024-11-26 07:42:06.698217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.587 [2024-11-26 07:42:06.698222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.587 [2024-11-26 07:42:06.698227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.587 [2024-11-26 07:42:06.698240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.587 qpair failed and we were unable to recover it.
00:32:22.587 [2024-11-26 07:42:06.708181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.587 [2024-11-26 07:42:06.708233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.587 [2024-11-26 07:42:06.708244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.587 [2024-11-26 07:42:06.708249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.587 [2024-11-26 07:42:06.708254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.587 [2024-11-26 07:42:06.708264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.587 qpair failed and we were unable to recover it.
00:32:22.849 [2024-11-26 07:42:06.718215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.849 [2024-11-26 07:42:06.718266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.850 [2024-11-26 07:42:06.718276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.850 [2024-11-26 07:42:06.718282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.850 [2024-11-26 07:42:06.718287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.850 [2024-11-26 07:42:06.718297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.850 qpair failed and we were unable to recover it.
00:32:22.850 [2024-11-26 07:42:06.728286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.850 [2024-11-26 07:42:06.728337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.850 [2024-11-26 07:42:06.728347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.850 [2024-11-26 07:42:06.728352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.850 [2024-11-26 07:42:06.728357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.850 [2024-11-26 07:42:06.728367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.850 qpair failed and we were unable to recover it.
00:32:22.850 [2024-11-26 07:42:06.738256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.850 [2024-11-26 07:42:06.738299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.850 [2024-11-26 07:42:06.738309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.850 [2024-11-26 07:42:06.738314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.850 [2024-11-26 07:42:06.738319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.850 [2024-11-26 07:42:06.738329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.850 qpair failed and we were unable to recover it.
00:32:22.850 [2024-11-26 07:42:06.748244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.850 [2024-11-26 07:42:06.748314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.850 [2024-11-26 07:42:06.748323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.850 [2024-11-26 07:42:06.748329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.850 [2024-11-26 07:42:06.748333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.850 [2024-11-26 07:42:06.748344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.850 qpair failed and we were unable to recover it.
00:32:22.850 [2024-11-26 07:42:06.758322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.850 [2024-11-26 07:42:06.758372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.850 [2024-11-26 07:42:06.758382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.850 [2024-11-26 07:42:06.758387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.850 [2024-11-26 07:42:06.758392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.850 [2024-11-26 07:42:06.758402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.850 qpair failed and we were unable to recover it.
00:32:22.850 [2024-11-26 07:42:06.768343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.850 [2024-11-26 07:42:06.768403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.850 [2024-11-26 07:42:06.768413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.850 [2024-11-26 07:42:06.768418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.850 [2024-11-26 07:42:06.768423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.850 [2024-11-26 07:42:06.768434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.850 qpair failed and we were unable to recover it.
00:32:22.850 [2024-11-26 07:42:06.778362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.850 [2024-11-26 07:42:06.778404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.850 [2024-11-26 07:42:06.778414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.850 [2024-11-26 07:42:06.778419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.850 [2024-11-26 07:42:06.778424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.850 [2024-11-26 07:42:06.778434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.850 qpair failed and we were unable to recover it.
00:32:22.850 [2024-11-26 07:42:06.788378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.850 [2024-11-26 07:42:06.788429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.850 [2024-11-26 07:42:06.788441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.850 [2024-11-26 07:42:06.788447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.850 [2024-11-26 07:42:06.788451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.850 [2024-11-26 07:42:06.788462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.850 qpair failed and we were unable to recover it.
00:32:22.850 [2024-11-26 07:42:06.798421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.850 [2024-11-26 07:42:06.798472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.850 [2024-11-26 07:42:06.798482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.850 [2024-11-26 07:42:06.798487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.850 [2024-11-26 07:42:06.798492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.850 [2024-11-26 07:42:06.798502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.850 qpair failed and we were unable to recover it.
00:32:22.850 [2024-11-26 07:42:06.808497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.850 [2024-11-26 07:42:06.808549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.850 [2024-11-26 07:42:06.808559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.850 [2024-11-26 07:42:06.808564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.850 [2024-11-26 07:42:06.808569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.850 [2024-11-26 07:42:06.808579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.850 qpair failed and we were unable to recover it.
00:32:22.850 [2024-11-26 07:42:06.818446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.850 [2024-11-26 07:42:06.818490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.850 [2024-11-26 07:42:06.818501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.850 [2024-11-26 07:42:06.818507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.850 [2024-11-26 07:42:06.818511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.850 [2024-11-26 07:42:06.818522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.850 qpair failed and we were unable to recover it.
00:32:22.850 [2024-11-26 07:42:06.828489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.850 [2024-11-26 07:42:06.828536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.850 [2024-11-26 07:42:06.828547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.850 [2024-11-26 07:42:06.828552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.850 [2024-11-26 07:42:06.828559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.850 [2024-11-26 07:42:06.828570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.851 qpair failed and we were unable to recover it.
00:32:22.851 [2024-11-26 07:42:06.838526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.851 [2024-11-26 07:42:06.838594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.851 [2024-11-26 07:42:06.838605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.851 [2024-11-26 07:42:06.838610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.851 [2024-11-26 07:42:06.838614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.851 [2024-11-26 07:42:06.838624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.851 qpair failed and we were unable to recover it.
00:32:22.851 [2024-11-26 07:42:06.848460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.851 [2024-11-26 07:42:06.848557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.851 [2024-11-26 07:42:06.848568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.851 [2024-11-26 07:42:06.848574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.851 [2024-11-26 07:42:06.848578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.851 [2024-11-26 07:42:06.848589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.851 qpair failed and we were unable to recover it.
00:32:22.851 [2024-11-26 07:42:06.858568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.851 [2024-11-26 07:42:06.858618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.851 [2024-11-26 07:42:06.858628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.851 [2024-11-26 07:42:06.858633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.851 [2024-11-26 07:42:06.858638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.851 [2024-11-26 07:42:06.858648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.851 qpair failed and we were unable to recover it.
00:32:22.851 [2024-11-26 07:42:06.868616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.851 [2024-11-26 07:42:06.868667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.851 [2024-11-26 07:42:06.868685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.851 [2024-11-26 07:42:06.868692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.851 [2024-11-26 07:42:06.868697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.851 [2024-11-26 07:42:06.868711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.851 qpair failed and we were unable to recover it.
00:32:22.851 [2024-11-26 07:42:06.878593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.851 [2024-11-26 07:42:06.878662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.851 [2024-11-26 07:42:06.878681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.851 [2024-11-26 07:42:06.878687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.851 [2024-11-26 07:42:06.878692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.851 [2024-11-26 07:42:06.878707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.851 qpair failed and we were unable to recover it.
00:32:22.851 [2024-11-26 07:42:06.888664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.851 [2024-11-26 07:42:06.888718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.851 [2024-11-26 07:42:06.888736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.851 [2024-11-26 07:42:06.888743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.851 [2024-11-26 07:42:06.888748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.851 [2024-11-26 07:42:06.888762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.851 qpair failed and we were unable to recover it.
00:32:22.851 [2024-11-26 07:42:06.898682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.851 [2024-11-26 07:42:06.898734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.851 [2024-11-26 07:42:06.898746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.851 [2024-11-26 07:42:06.898752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.851 [2024-11-26 07:42:06.898756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.851 [2024-11-26 07:42:06.898767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.851 qpair failed and we were unable to recover it.
00:32:22.851 [2024-11-26 07:42:06.908700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.851 [2024-11-26 07:42:06.908784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.851 [2024-11-26 07:42:06.908794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.851 [2024-11-26 07:42:06.908800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.851 [2024-11-26 07:42:06.908805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.851 [2024-11-26 07:42:06.908816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.851 qpair failed and we were unable to recover it.
00:32:22.851 [2024-11-26 07:42:06.918747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.851 [2024-11-26 07:42:06.918828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.851 [2024-11-26 07:42:06.918842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.851 [2024-11-26 07:42:06.918848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.851 [2024-11-26 07:42:06.918852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.851 [2024-11-26 07:42:06.918867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.851 qpair failed and we were unable to recover it.
00:32:22.851 [2024-11-26 07:42:06.928636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.851 [2024-11-26 07:42:06.928687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.851 [2024-11-26 07:42:06.928697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.851 [2024-11-26 07:42:06.928702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.851 [2024-11-26 07:42:06.928707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.851 [2024-11-26 07:42:06.928717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.851 qpair failed and we were unable to recover it.
00:32:22.851 [2024-11-26 07:42:06.938758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.851 [2024-11-26 07:42:06.938809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.851 [2024-11-26 07:42:06.938819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.851 [2024-11-26 07:42:06.938824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.851 [2024-11-26 07:42:06.938829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.851 [2024-11-26 07:42:06.938839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.851 qpair failed and we were unable to recover it.
00:32:22.851 [2024-11-26 07:42:06.948817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.851 [2024-11-26 07:42:06.948868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.851 [2024-11-26 07:42:06.948879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.851 [2024-11-26 07:42:06.948885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.851 [2024-11-26 07:42:06.948890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.851 [2024-11-26 07:42:06.948902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.851 qpair failed and we were unable to recover it.
00:32:22.851 [2024-11-26 07:42:06.958847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.851 [2024-11-26 07:42:06.958900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.851 [2024-11-26 07:42:06.958910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.851 [2024-11-26 07:42:06.958915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.851 [2024-11-26 07:42:06.958922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.852 [2024-11-26 07:42:06.958933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.852 qpair failed and we were unable to recover it.
00:32:22.852 [2024-11-26 07:42:06.968895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.852 [2024-11-26 07:42:06.968947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.852 [2024-11-26 07:42:06.968957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.852 [2024-11-26 07:42:06.968962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.852 [2024-11-26 07:42:06.968967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:22.852 [2024-11-26 07:42:06.968977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:22.852 qpair failed and we were unable to recover it.
00:32:23.114 [2024-11-26 07:42:06.978902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.114 [2024-11-26 07:42:06.978949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.114 [2024-11-26 07:42:06.978959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.115 [2024-11-26 07:42:06.978965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.115 [2024-11-26 07:42:06.978969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.115 [2024-11-26 07:42:06.978979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.115 qpair failed and we were unable to recover it.
00:32:23.115 [2024-11-26 07:42:06.988929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.115 [2024-11-26 07:42:06.988975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.115 [2024-11-26 07:42:06.988985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.115 [2024-11-26 07:42:06.988990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.115 [2024-11-26 07:42:06.988995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.115 [2024-11-26 07:42:06.989005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.115 qpair failed and we were unable to recover it.
00:32:23.115 [2024-11-26 07:42:06.998961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.115 [2024-11-26 07:42:06.999012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.115 [2024-11-26 07:42:06.999022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.115 [2024-11-26 07:42:06.999028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.115 [2024-11-26 07:42:06.999033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.115 [2024-11-26 07:42:06.999043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.115 qpair failed and we were unable to recover it. 
00:32:23.115 [2024-11-26 07:42:07.008892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.115 [2024-11-26 07:42:07.008941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.115 [2024-11-26 07:42:07.008952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.115 [2024-11-26 07:42:07.008958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.115 [2024-11-26 07:42:07.008962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.115 [2024-11-26 07:42:07.008973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.115 qpair failed and we were unable to recover it. 
00:32:23.115 [2024-11-26 07:42:07.019019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.115 [2024-11-26 07:42:07.019066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.115 [2024-11-26 07:42:07.019077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.115 [2024-11-26 07:42:07.019082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.115 [2024-11-26 07:42:07.019087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.115 [2024-11-26 07:42:07.019097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.115 qpair failed and we were unable to recover it. 
00:32:23.115 [2024-11-26 07:42:07.029036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.115 [2024-11-26 07:42:07.029088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.115 [2024-11-26 07:42:07.029098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.115 [2024-11-26 07:42:07.029104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.115 [2024-11-26 07:42:07.029108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.115 [2024-11-26 07:42:07.029119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.115 qpair failed and we were unable to recover it. 
00:32:23.115 [2024-11-26 07:42:07.039096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.115 [2024-11-26 07:42:07.039146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.115 [2024-11-26 07:42:07.039156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.115 [2024-11-26 07:42:07.039161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.115 [2024-11-26 07:42:07.039166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.115 [2024-11-26 07:42:07.039176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.115 qpair failed and we were unable to recover it. 
00:32:23.115 [2024-11-26 07:42:07.049093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.115 [2024-11-26 07:42:07.049145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.115 [2024-11-26 07:42:07.049156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.115 [2024-11-26 07:42:07.049161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.115 [2024-11-26 07:42:07.049166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.115 [2024-11-26 07:42:07.049176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.115 qpair failed and we were unable to recover it. 
00:32:23.115 [2024-11-26 07:42:07.059101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.115 [2024-11-26 07:42:07.059146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.115 [2024-11-26 07:42:07.059156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.115 [2024-11-26 07:42:07.059161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.115 [2024-11-26 07:42:07.059165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.115 [2024-11-26 07:42:07.059175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.115 qpair failed and we were unable to recover it. 
00:32:23.115 [2024-11-26 07:42:07.069142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.115 [2024-11-26 07:42:07.069222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.115 [2024-11-26 07:42:07.069232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.115 [2024-11-26 07:42:07.069238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.115 [2024-11-26 07:42:07.069242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.115 [2024-11-26 07:42:07.069252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.115 qpair failed and we were unable to recover it. 
00:32:23.115 [2024-11-26 07:42:07.079191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.115 [2024-11-26 07:42:07.079241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.115 [2024-11-26 07:42:07.079250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.115 [2024-11-26 07:42:07.079256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.115 [2024-11-26 07:42:07.079260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.115 [2024-11-26 07:42:07.079270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.115 qpair failed and we were unable to recover it. 
00:32:23.115 [2024-11-26 07:42:07.089074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.115 [2024-11-26 07:42:07.089127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.115 [2024-11-26 07:42:07.089136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.115 [2024-11-26 07:42:07.089144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.115 [2024-11-26 07:42:07.089149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.115 [2024-11-26 07:42:07.089159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.115 qpair failed and we were unable to recover it. 
00:32:23.115 [2024-11-26 07:42:07.099231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.115 [2024-11-26 07:42:07.099278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.115 [2024-11-26 07:42:07.099288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.115 [2024-11-26 07:42:07.099293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.115 [2024-11-26 07:42:07.099298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.115 [2024-11-26 07:42:07.099308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.115 qpair failed and we were unable to recover it. 
00:32:23.115 [2024-11-26 07:42:07.109123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.115 [2024-11-26 07:42:07.109164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.115 [2024-11-26 07:42:07.109174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.116 [2024-11-26 07:42:07.109179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.116 [2024-11-26 07:42:07.109183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.116 [2024-11-26 07:42:07.109194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.116 qpair failed and we were unable to recover it. 
00:32:23.116 [2024-11-26 07:42:07.119299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.116 [2024-11-26 07:42:07.119362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.116 [2024-11-26 07:42:07.119371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.116 [2024-11-26 07:42:07.119377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.116 [2024-11-26 07:42:07.119381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.116 [2024-11-26 07:42:07.119392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.116 qpair failed and we were unable to recover it. 
00:32:23.116 [2024-11-26 07:42:07.129342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.116 [2024-11-26 07:42:07.129455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.116 [2024-11-26 07:42:07.129465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.116 [2024-11-26 07:42:07.129471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.116 [2024-11-26 07:42:07.129475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.116 [2024-11-26 07:42:07.129489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.116 qpair failed and we were unable to recover it. 
00:32:23.116 [2024-11-26 07:42:07.139353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.116 [2024-11-26 07:42:07.139397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.116 [2024-11-26 07:42:07.139408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.116 [2024-11-26 07:42:07.139413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.116 [2024-11-26 07:42:07.139418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.116 [2024-11-26 07:42:07.139428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.116 qpair failed and we were unable to recover it. 
00:32:23.116 [2024-11-26 07:42:07.149374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.116 [2024-11-26 07:42:07.149468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.116 [2024-11-26 07:42:07.149478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.116 [2024-11-26 07:42:07.149483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.116 [2024-11-26 07:42:07.149488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.116 [2024-11-26 07:42:07.149498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.116 qpair failed and we were unable to recover it. 
00:32:23.116 [2024-11-26 07:42:07.159414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.116 [2024-11-26 07:42:07.159468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.116 [2024-11-26 07:42:07.159478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.116 [2024-11-26 07:42:07.159483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.116 [2024-11-26 07:42:07.159488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.116 [2024-11-26 07:42:07.159498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.116 qpair failed and we were unable to recover it. 
00:32:23.116 [2024-11-26 07:42:07.169430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.116 [2024-11-26 07:42:07.169477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.116 [2024-11-26 07:42:07.169487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.116 [2024-11-26 07:42:07.169492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.116 [2024-11-26 07:42:07.169497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.116 [2024-11-26 07:42:07.169506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.116 qpair failed and we were unable to recover it. 
00:32:23.116 [2024-11-26 07:42:07.179476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.116 [2024-11-26 07:42:07.179559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.116 [2024-11-26 07:42:07.179570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.116 [2024-11-26 07:42:07.179576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.116 [2024-11-26 07:42:07.179580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.116 [2024-11-26 07:42:07.179591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.116 qpair failed and we were unable to recover it. 
00:32:23.116 [2024-11-26 07:42:07.189476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.116 [2024-11-26 07:42:07.189554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.116 [2024-11-26 07:42:07.189565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.116 [2024-11-26 07:42:07.189570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.116 [2024-11-26 07:42:07.189575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.116 [2024-11-26 07:42:07.189585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.116 qpair failed and we were unable to recover it. 
00:32:23.116 [2024-11-26 07:42:07.199524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.116 [2024-11-26 07:42:07.199576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.116 [2024-11-26 07:42:07.199586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.116 [2024-11-26 07:42:07.199591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.116 [2024-11-26 07:42:07.199597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.116 [2024-11-26 07:42:07.199608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.116 qpair failed and we were unable to recover it. 
00:32:23.116 [2024-11-26 07:42:07.209553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.116 [2024-11-26 07:42:07.209606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.116 [2024-11-26 07:42:07.209616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.116 [2024-11-26 07:42:07.209621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.116 [2024-11-26 07:42:07.209626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.116 [2024-11-26 07:42:07.209636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.116 qpair failed and we were unable to recover it. 
00:32:23.116 [2024-11-26 07:42:07.219585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.116 [2024-11-26 07:42:07.219638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.116 [2024-11-26 07:42:07.219647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.116 [2024-11-26 07:42:07.219656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.116 [2024-11-26 07:42:07.219661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.116 [2024-11-26 07:42:07.219671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.116 qpair failed and we were unable to recover it. 
00:32:23.116 [2024-11-26 07:42:07.229567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.116 [2024-11-26 07:42:07.229612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.116 [2024-11-26 07:42:07.229622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.116 [2024-11-26 07:42:07.229627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.116 [2024-11-26 07:42:07.229632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.116 [2024-11-26 07:42:07.229643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.116 qpair failed and we were unable to recover it. 
00:32:23.116 [2024-11-26 07:42:07.239625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.116 [2024-11-26 07:42:07.239675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.117 [2024-11-26 07:42:07.239685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.117 [2024-11-26 07:42:07.239690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.117 [2024-11-26 07:42:07.239694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.117 [2024-11-26 07:42:07.239704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.117 qpair failed and we were unable to recover it. 
00:32:23.380 [2024-11-26 07:42:07.249533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.380 [2024-11-26 07:42:07.249584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.380 [2024-11-26 07:42:07.249595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.380 [2024-11-26 07:42:07.249600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.380 [2024-11-26 07:42:07.249605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.380 [2024-11-26 07:42:07.249615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.380 qpair failed and we were unable to recover it. 
00:32:23.380 [2024-11-26 07:42:07.259649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.380 [2024-11-26 07:42:07.259706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.380 [2024-11-26 07:42:07.259725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.380 [2024-11-26 07:42:07.259732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.380 [2024-11-26 07:42:07.259737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.380 [2024-11-26 07:42:07.259754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.380 qpair failed and we were unable to recover it. 
00:32:23.380 [2024-11-26 07:42:07.269693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.380 [2024-11-26 07:42:07.269741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.380 [2024-11-26 07:42:07.269752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.380 [2024-11-26 07:42:07.269758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.380 [2024-11-26 07:42:07.269764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.380 [2024-11-26 07:42:07.269777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.380 qpair failed and we were unable to recover it. 
00:32:23.380 [2024-11-26 07:42:07.279698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.380 [2024-11-26 07:42:07.279758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.380 [2024-11-26 07:42:07.279769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.380 [2024-11-26 07:42:07.279774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.380 [2024-11-26 07:42:07.279779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.380 [2024-11-26 07:42:07.279789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.380 qpair failed and we were unable to recover it. 
00:32:23.380 [2024-11-26 07:42:07.289640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.380 [2024-11-26 07:42:07.289692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.380 [2024-11-26 07:42:07.289702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.380 [2024-11-26 07:42:07.289707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.380 [2024-11-26 07:42:07.289712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.380 [2024-11-26 07:42:07.289722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.380 qpair failed and we were unable to recover it. 
00:32:23.380 [2024-11-26 07:42:07.299802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.380 [2024-11-26 07:42:07.299849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.380 [2024-11-26 07:42:07.299859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.380 [2024-11-26 07:42:07.299870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.380 [2024-11-26 07:42:07.299875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.380 [2024-11-26 07:42:07.299885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.380 qpair failed and we were unable to recover it. 
00:32:23.380 [2024-11-26 07:42:07.309817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.380 [2024-11-26 07:42:07.309871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.380 [2024-11-26 07:42:07.309882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.380 [2024-11-26 07:42:07.309887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.380 [2024-11-26 07:42:07.309892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.380 [2024-11-26 07:42:07.309902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.380 qpair failed and we were unable to recover it. 
00:32:23.380 [2024-11-26 07:42:07.319869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.380 [2024-11-26 07:42:07.319923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.380 [2024-11-26 07:42:07.319933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.380 [2024-11-26 07:42:07.319938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.380 [2024-11-26 07:42:07.319943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.380 [2024-11-26 07:42:07.319954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.380 qpair failed and we were unable to recover it. 
00:32:23.380 [2024-11-26 07:42:07.329948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.380 [2024-11-26 07:42:07.330001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.380 [2024-11-26 07:42:07.330011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.380 [2024-11-26 07:42:07.330016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.380 [2024-11-26 07:42:07.330021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.380 [2024-11-26 07:42:07.330031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.380 qpair failed and we were unable to recover it. 
00:32:23.380 [2024-11-26 07:42:07.339879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.380 [2024-11-26 07:42:07.339932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.380 [2024-11-26 07:42:07.339942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.380 [2024-11-26 07:42:07.339947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.380 [2024-11-26 07:42:07.339952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.380 [2024-11-26 07:42:07.339962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.380 qpair failed and we were unable to recover it. 
00:32:23.380 [2024-11-26 07:42:07.349939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.381 [2024-11-26 07:42:07.349988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.381 [2024-11-26 07:42:07.350001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.381 [2024-11-26 07:42:07.350006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.381 [2024-11-26 07:42:07.350011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.381 [2024-11-26 07:42:07.350021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.381 qpair failed and we were unable to recover it. 
00:32:23.381 [2024-11-26 07:42:07.359830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.381 [2024-11-26 07:42:07.359889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.381 [2024-11-26 07:42:07.359899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.381 [2024-11-26 07:42:07.359905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.381 [2024-11-26 07:42:07.359909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.381 [2024-11-26 07:42:07.359920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.381 qpair failed and we were unable to recover it. 
00:32:23.381 [2024-11-26 07:42:07.370017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.381 [2024-11-26 07:42:07.370068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.381 [2024-11-26 07:42:07.370078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.381 [2024-11-26 07:42:07.370083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.381 [2024-11-26 07:42:07.370088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.381 [2024-11-26 07:42:07.370099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.381 qpair failed and we were unable to recover it. 
00:32:23.381 [2024-11-26 07:42:07.380032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.381 [2024-11-26 07:42:07.380111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.381 [2024-11-26 07:42:07.380121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.381 [2024-11-26 07:42:07.380126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.381 [2024-11-26 07:42:07.380131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.381 [2024-11-26 07:42:07.380142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.381 qpair failed and we were unable to recover it. 
00:32:23.381 [2024-11-26 07:42:07.390018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.381 [2024-11-26 07:42:07.390071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.381 [2024-11-26 07:42:07.390081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.381 [2024-11-26 07:42:07.390086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.381 [2024-11-26 07:42:07.390094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.381 [2024-11-26 07:42:07.390104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.381 qpair failed and we were unable to recover it. 
00:32:23.381 [2024-11-26 07:42:07.400001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.381 [2024-11-26 07:42:07.400059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.381 [2024-11-26 07:42:07.400069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.381 [2024-11-26 07:42:07.400074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.381 [2024-11-26 07:42:07.400079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.381 [2024-11-26 07:42:07.400089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.381 qpair failed and we were unable to recover it. 
00:32:23.381 [2024-11-26 07:42:07.410125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.381 [2024-11-26 07:42:07.410177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.381 [2024-11-26 07:42:07.410187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.381 [2024-11-26 07:42:07.410192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.381 [2024-11-26 07:42:07.410197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.381 [2024-11-26 07:42:07.410208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.381 qpair failed and we were unable to recover it. 
00:32:23.381 [2024-11-26 07:42:07.420001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.381 [2024-11-26 07:42:07.420064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.381 [2024-11-26 07:42:07.420075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.381 [2024-11-26 07:42:07.420080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.381 [2024-11-26 07:42:07.420085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.381 [2024-11-26 07:42:07.420095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.381 qpair failed and we were unable to recover it. 
00:32:23.381 [2024-11-26 07:42:07.430160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.381 [2024-11-26 07:42:07.430209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.381 [2024-11-26 07:42:07.430219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.381 [2024-11-26 07:42:07.430224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.381 [2024-11-26 07:42:07.430229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.381 [2024-11-26 07:42:07.430239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.381 qpair failed and we were unable to recover it. 
00:32:23.381 [2024-11-26 07:42:07.440193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.381 [2024-11-26 07:42:07.440261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.381 [2024-11-26 07:42:07.440271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.381 [2024-11-26 07:42:07.440277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.381 [2024-11-26 07:42:07.440282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.381 [2024-11-26 07:42:07.440292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.381 qpair failed and we were unable to recover it. 
00:32:23.381 [2024-11-26 07:42:07.450085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.381 [2024-11-26 07:42:07.450136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.381 [2024-11-26 07:42:07.450145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.381 [2024-11-26 07:42:07.450151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.381 [2024-11-26 07:42:07.450157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.381 [2024-11-26 07:42:07.450168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.381 qpair failed and we were unable to recover it. 
00:32:23.381 [2024-11-26 07:42:07.460282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.381 [2024-11-26 07:42:07.460330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.381 [2024-11-26 07:42:07.460339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.381 [2024-11-26 07:42:07.460345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.381 [2024-11-26 07:42:07.460349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.381 [2024-11-26 07:42:07.460359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.381 qpair failed and we were unable to recover it. 
00:32:23.381 [2024-11-26 07:42:07.470251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.381 [2024-11-26 07:42:07.470299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.381 [2024-11-26 07:42:07.470309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.381 [2024-11-26 07:42:07.470314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.381 [2024-11-26 07:42:07.470319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.381 [2024-11-26 07:42:07.470329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.381 qpair failed and we were unable to recover it. 
00:32:23.381 [2024-11-26 07:42:07.480287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.382 [2024-11-26 07:42:07.480338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.382 [2024-11-26 07:42:07.480350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.382 [2024-11-26 07:42:07.480355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.382 [2024-11-26 07:42:07.480360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.382 [2024-11-26 07:42:07.480370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.382 qpair failed and we were unable to recover it. 
00:32:23.382 [2024-11-26 07:42:07.490317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.382 [2024-11-26 07:42:07.490370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.382 [2024-11-26 07:42:07.490379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.382 [2024-11-26 07:42:07.490384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.382 [2024-11-26 07:42:07.490389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.382 [2024-11-26 07:42:07.490399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.382 qpair failed and we were unable to recover it. 
00:32:23.382 [2024-11-26 07:42:07.500324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.382 [2024-11-26 07:42:07.500444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.382 [2024-11-26 07:42:07.500455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.382 [2024-11-26 07:42:07.500460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.382 [2024-11-26 07:42:07.500465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.382 [2024-11-26 07:42:07.500475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.382 qpair failed and we were unable to recover it. 
00:32:23.645 [2024-11-26 07:42:07.510348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.645 [2024-11-26 07:42:07.510395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.645 [2024-11-26 07:42:07.510405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.645 [2024-11-26 07:42:07.510411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.645 [2024-11-26 07:42:07.510416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.645 [2024-11-26 07:42:07.510426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.645 qpair failed and we were unable to recover it. 
00:32:23.645 [2024-11-26 07:42:07.520402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.645 [2024-11-26 07:42:07.520454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.645 [2024-11-26 07:42:07.520464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.645 [2024-11-26 07:42:07.520470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.645 [2024-11-26 07:42:07.520477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.645 [2024-11-26 07:42:07.520488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.645 qpair failed and we were unable to recover it. 
00:32:23.645 [2024-11-26 07:42:07.530432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.645 [2024-11-26 07:42:07.530482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.645 [2024-11-26 07:42:07.530492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.645 [2024-11-26 07:42:07.530497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.645 [2024-11-26 07:42:07.530502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.645 [2024-11-26 07:42:07.530512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.645 qpair failed and we were unable to recover it. 
00:32:23.645 [2024-11-26 07:42:07.540315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.645 [2024-11-26 07:42:07.540378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.645 [2024-11-26 07:42:07.540388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.645 [2024-11-26 07:42:07.540393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.645 [2024-11-26 07:42:07.540398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.645 [2024-11-26 07:42:07.540408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.645 qpair failed and we were unable to recover it. 
00:32:23.645 [2024-11-26 07:42:07.550473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.645 [2024-11-26 07:42:07.550518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.645 [2024-11-26 07:42:07.550528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.645 [2024-11-26 07:42:07.550534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.645 [2024-11-26 07:42:07.550538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.645 [2024-11-26 07:42:07.550549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.645 qpair failed and we were unable to recover it. 
00:32:23.645 [2024-11-26 07:42:07.560481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.645 [2024-11-26 07:42:07.560547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.645 [2024-11-26 07:42:07.560557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.645 [2024-11-26 07:42:07.560563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.645 [2024-11-26 07:42:07.560568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.645 [2024-11-26 07:42:07.560579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.645 qpair failed and we were unable to recover it. 
00:32:23.645 [2024-11-26 07:42:07.570418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.645 [2024-11-26 07:42:07.570469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.645 [2024-11-26 07:42:07.570480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.645 [2024-11-26 07:42:07.570486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.645 [2024-11-26 07:42:07.570490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.645 [2024-11-26 07:42:07.570501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.645 qpair failed and we were unable to recover it. 
00:32:23.645 [2024-11-26 07:42:07.580579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.645 [2024-11-26 07:42:07.580658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.645 [2024-11-26 07:42:07.580668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.645 [2024-11-26 07:42:07.580674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.646 [2024-11-26 07:42:07.580678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.646 [2024-11-26 07:42:07.580689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.646 qpair failed and we were unable to recover it.
00:32:23.646 [2024-11-26 07:42:07.590589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.646 [2024-11-26 07:42:07.590637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.646 [2024-11-26 07:42:07.590647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.646 [2024-11-26 07:42:07.590653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.646 [2024-11-26 07:42:07.590657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.646 [2024-11-26 07:42:07.590667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.646 qpair failed and we were unable to recover it.
00:32:23.646 [2024-11-26 07:42:07.600630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.646 [2024-11-26 07:42:07.600724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.646 [2024-11-26 07:42:07.600735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.646 [2024-11-26 07:42:07.600740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.646 [2024-11-26 07:42:07.600745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.646 [2024-11-26 07:42:07.600755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.646 qpair failed and we were unable to recover it.
00:32:23.646 [2024-11-26 07:42:07.610583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.646 [2024-11-26 07:42:07.610645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.646 [2024-11-26 07:42:07.610655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.646 [2024-11-26 07:42:07.610661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.646 [2024-11-26 07:42:07.610665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.646 [2024-11-26 07:42:07.610675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.646 qpair failed and we were unable to recover it.
00:32:23.646 [2024-11-26 07:42:07.620683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.646 [2024-11-26 07:42:07.620735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.646 [2024-11-26 07:42:07.620745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.646 [2024-11-26 07:42:07.620751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.646 [2024-11-26 07:42:07.620757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.646 [2024-11-26 07:42:07.620767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.646 qpair failed and we were unable to recover it.
00:32:23.646 [2024-11-26 07:42:07.630566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.646 [2024-11-26 07:42:07.630611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.646 [2024-11-26 07:42:07.630620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.646 [2024-11-26 07:42:07.630626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.646 [2024-11-26 07:42:07.630630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.646 [2024-11-26 07:42:07.630640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.646 qpair failed and we were unable to recover it.
00:32:23.646 [2024-11-26 07:42:07.640741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.646 [2024-11-26 07:42:07.640793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.646 [2024-11-26 07:42:07.640802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.646 [2024-11-26 07:42:07.640808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.646 [2024-11-26 07:42:07.640812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.646 [2024-11-26 07:42:07.640822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.646 qpair failed and we were unable to recover it.
00:32:23.646 [2024-11-26 07:42:07.650741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.646 [2024-11-26 07:42:07.650789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.646 [2024-11-26 07:42:07.650799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.646 [2024-11-26 07:42:07.650807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.646 [2024-11-26 07:42:07.650812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.646 [2024-11-26 07:42:07.650822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.646 qpair failed and we were unable to recover it.
00:32:23.646 [2024-11-26 07:42:07.660773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.646 [2024-11-26 07:42:07.660822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.646 [2024-11-26 07:42:07.660832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.646 [2024-11-26 07:42:07.660838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.646 [2024-11-26 07:42:07.660843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.646 [2024-11-26 07:42:07.660853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.646 qpair failed and we were unable to recover it.
00:32:23.646 [2024-11-26 07:42:07.670821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.646 [2024-11-26 07:42:07.670891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.646 [2024-11-26 07:42:07.670902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.646 [2024-11-26 07:42:07.670907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.646 [2024-11-26 07:42:07.670911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.646 [2024-11-26 07:42:07.670922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.646 qpair failed and we were unable to recover it.
00:32:23.646 [2024-11-26 07:42:07.680812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.646 [2024-11-26 07:42:07.680859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.646 [2024-11-26 07:42:07.680873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.646 [2024-11-26 07:42:07.680879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.646 [2024-11-26 07:42:07.680883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.646 [2024-11-26 07:42:07.680894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.646 qpair failed and we were unable to recover it.
00:32:23.646 [2024-11-26 07:42:07.690935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.646 [2024-11-26 07:42:07.690990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.646 [2024-11-26 07:42:07.690999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.646 [2024-11-26 07:42:07.691005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.646 [2024-11-26 07:42:07.691009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.646 [2024-11-26 07:42:07.691022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.646 qpair failed and we were unable to recover it.
00:32:23.646 [2024-11-26 07:42:07.700757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.646 [2024-11-26 07:42:07.700830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.646 [2024-11-26 07:42:07.700840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.646 [2024-11-26 07:42:07.700845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.646 [2024-11-26 07:42:07.700851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.646 [2024-11-26 07:42:07.700868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.646 qpair failed and we were unable to recover it.
00:32:23.646 [2024-11-26 07:42:07.710803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.646 [2024-11-26 07:42:07.710848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.646 [2024-11-26 07:42:07.710858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.646 [2024-11-26 07:42:07.710868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.647 [2024-11-26 07:42:07.710872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.647 [2024-11-26 07:42:07.710883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.647 qpair failed and we were unable to recover it.
00:32:23.647 [2024-11-26 07:42:07.720966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.647 [2024-11-26 07:42:07.721032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.647 [2024-11-26 07:42:07.721041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.647 [2024-11-26 07:42:07.721047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.647 [2024-11-26 07:42:07.721052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.647 [2024-11-26 07:42:07.721062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.647 qpair failed and we were unable to recover it.
00:32:23.647 [2024-11-26 07:42:07.730958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.647 [2024-11-26 07:42:07.731010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.647 [2024-11-26 07:42:07.731020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.647 [2024-11-26 07:42:07.731025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.647 [2024-11-26 07:42:07.731030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.647 [2024-11-26 07:42:07.731040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.647 qpair failed and we were unable to recover it.
00:32:23.647 [2024-11-26 07:42:07.741105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.647 [2024-11-26 07:42:07.741173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.647 [2024-11-26 07:42:07.741183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.647 [2024-11-26 07:42:07.741189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.647 [2024-11-26 07:42:07.741194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.647 [2024-11-26 07:42:07.741204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.647 qpair failed and we were unable to recover it.
00:32:23.647 [2024-11-26 07:42:07.751101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.647 [2024-11-26 07:42:07.751145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.647 [2024-11-26 07:42:07.751155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.647 [2024-11-26 07:42:07.751160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.647 [2024-11-26 07:42:07.751165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.647 [2024-11-26 07:42:07.751175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.647 qpair failed and we were unable to recover it.
00:32:23.647 [2024-11-26 07:42:07.761117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.647 [2024-11-26 07:42:07.761173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.647 [2024-11-26 07:42:07.761183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.647 [2024-11-26 07:42:07.761188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.647 [2024-11-26 07:42:07.761193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.647 [2024-11-26 07:42:07.761203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.647 qpair failed and we were unable to recover it.
00:32:23.647 [2024-11-26 07:42:07.771110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.647 [2024-11-26 07:42:07.771165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.647 [2024-11-26 07:42:07.771174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.647 [2024-11-26 07:42:07.771180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.647 [2024-11-26 07:42:07.771184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.647 [2024-11-26 07:42:07.771194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.647 qpair failed and we were unable to recover it.
00:32:23.909 [2024-11-26 07:42:07.781134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.909 [2024-11-26 07:42:07.781186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.909 [2024-11-26 07:42:07.781198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.909 [2024-11-26 07:42:07.781203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.909 [2024-11-26 07:42:07.781208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.909 [2024-11-26 07:42:07.781219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.909 qpair failed and we were unable to recover it.
00:32:23.909 [2024-11-26 07:42:07.791168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.909 [2024-11-26 07:42:07.791221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.909 [2024-11-26 07:42:07.791232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.909 [2024-11-26 07:42:07.791237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.909 [2024-11-26 07:42:07.791242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.909 [2024-11-26 07:42:07.791253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.909 qpair failed and we were unable to recover it.
00:32:23.909 [2024-11-26 07:42:07.801197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.909 [2024-11-26 07:42:07.801296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.909 [2024-11-26 07:42:07.801307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.909 [2024-11-26 07:42:07.801313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.909 [2024-11-26 07:42:07.801317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.909 [2024-11-26 07:42:07.801327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.909 qpair failed and we were unable to recover it.
00:32:23.909 [2024-11-26 07:42:07.811199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.909 [2024-11-26 07:42:07.811247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.909 [2024-11-26 07:42:07.811257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.909 [2024-11-26 07:42:07.811263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.909 [2024-11-26 07:42:07.811267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.909 [2024-11-26 07:42:07.811278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.909 qpair failed and we were unable to recover it.
00:32:23.909 [2024-11-26 07:42:07.821200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.909 [2024-11-26 07:42:07.821258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.909 [2024-11-26 07:42:07.821275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.909 [2024-11-26 07:42:07.821281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.909 [2024-11-26 07:42:07.821286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.909 [2024-11-26 07:42:07.821306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.909 qpair failed and we were unable to recover it.
00:32:23.909 [2024-11-26 07:42:07.831240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.909 [2024-11-26 07:42:07.831286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.909 [2024-11-26 07:42:07.831296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.909 [2024-11-26 07:42:07.831301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.909 [2024-11-26 07:42:07.831306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.909 [2024-11-26 07:42:07.831316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.909 qpair failed and we were unable to recover it.
00:32:23.909 [2024-11-26 07:42:07.841269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.909 [2024-11-26 07:42:07.841316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.909 [2024-11-26 07:42:07.841326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.909 [2024-11-26 07:42:07.841332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.909 [2024-11-26 07:42:07.841337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.909 [2024-11-26 07:42:07.841347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.909 qpair failed and we were unable to recover it.
00:32:23.909 [2024-11-26 07:42:07.851344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.909 [2024-11-26 07:42:07.851408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.909 [2024-11-26 07:42:07.851418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.909 [2024-11-26 07:42:07.851423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.909 [2024-11-26 07:42:07.851428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.909 [2024-11-26 07:42:07.851438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.909 qpair failed and we were unable to recover it.
00:32:23.909 [2024-11-26 07:42:07.861323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.909 [2024-11-26 07:42:07.861376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.909 [2024-11-26 07:42:07.861386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.909 [2024-11-26 07:42:07.861392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.909 [2024-11-26 07:42:07.861396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.909 [2024-11-26 07:42:07.861406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.909 qpair failed and we were unable to recover it.
00:32:23.909 [2024-11-26 07:42:07.871366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.909 [2024-11-26 07:42:07.871423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.909 [2024-11-26 07:42:07.871433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.909 [2024-11-26 07:42:07.871438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.909 [2024-11-26 07:42:07.871443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.909 [2024-11-26 07:42:07.871453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.909 qpair failed and we were unable to recover it.
00:32:23.909 [2024-11-26 07:42:07.881300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.909 [2024-11-26 07:42:07.881352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.909 [2024-11-26 07:42:07.881362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.909 [2024-11-26 07:42:07.881367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.909 [2024-11-26 07:42:07.881372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.909 [2024-11-26 07:42:07.881382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.909 qpair failed and we were unable to recover it.
00:32:23.909 [2024-11-26 07:42:07.891455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.909 [2024-11-26 07:42:07.891506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.909 [2024-11-26 07:42:07.891516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.909 [2024-11-26 07:42:07.891521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.909 [2024-11-26 07:42:07.891526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.909 [2024-11-26 07:42:07.891536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.909 qpair failed and we were unable to recover it.
00:32:23.909 [2024-11-26 07:42:07.901447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.909 [2024-11-26 07:42:07.901495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.909 [2024-11-26 07:42:07.901506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.909 [2024-11-26 07:42:07.901511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.909 [2024-11-26 07:42:07.901515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.909 [2024-11-26 07:42:07.901525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.909 qpair failed and we were unable to recover it.
00:32:23.909 [2024-11-26 07:42:07.911491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.909 [2024-11-26 07:42:07.911535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.909 [2024-11-26 07:42:07.911547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.909 [2024-11-26 07:42:07.911553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.909 [2024-11-26 07:42:07.911557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.910 [2024-11-26 07:42:07.911567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.910 qpair failed and we were unable to recover it.
00:32:23.910 [2024-11-26 07:42:07.921543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.910 [2024-11-26 07:42:07.921594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.910 [2024-11-26 07:42:07.921604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.910 [2024-11-26 07:42:07.921610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.910 [2024-11-26 07:42:07.921615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:23.910 [2024-11-26 07:42:07.921626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.910 qpair failed and we were unable to recover it.
00:32:23.910 [2024-11-26 07:42:07.931465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.910 [2024-11-26 07:42:07.931519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.910 [2024-11-26 07:42:07.931529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.910 [2024-11-26 07:42:07.931534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.910 [2024-11-26 07:42:07.931539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.910 [2024-11-26 07:42:07.931549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.910 qpair failed and we were unable to recover it. 
00:32:23.910 [2024-11-26 07:42:07.941443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.910 [2024-11-26 07:42:07.941501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.910 [2024-11-26 07:42:07.941511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.910 [2024-11-26 07:42:07.941516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.910 [2024-11-26 07:42:07.941521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.910 [2024-11-26 07:42:07.941531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.910 qpair failed and we were unable to recover it. 
00:32:23.910 [2024-11-26 07:42:07.951561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.910 [2024-11-26 07:42:07.951609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.910 [2024-11-26 07:42:07.951618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.910 [2024-11-26 07:42:07.951624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.910 [2024-11-26 07:42:07.951635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.910 [2024-11-26 07:42:07.951646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.910 qpair failed and we were unable to recover it. 
00:32:23.910 [2024-11-26 07:42:07.961498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.910 [2024-11-26 07:42:07.961552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.910 [2024-11-26 07:42:07.961562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.910 [2024-11-26 07:42:07.961567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.910 [2024-11-26 07:42:07.961572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.910 [2024-11-26 07:42:07.961582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.910 qpair failed and we were unable to recover it. 
00:32:23.910 [2024-11-26 07:42:07.971689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.910 [2024-11-26 07:42:07.971741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.910 [2024-11-26 07:42:07.971751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.910 [2024-11-26 07:42:07.971756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.910 [2024-11-26 07:42:07.971761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.910 [2024-11-26 07:42:07.971771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.910 qpair failed and we were unable to recover it. 
00:32:23.910 [2024-11-26 07:42:07.981677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.910 [2024-11-26 07:42:07.981774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.910 [2024-11-26 07:42:07.981785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.910 [2024-11-26 07:42:07.981791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.910 [2024-11-26 07:42:07.981795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.910 [2024-11-26 07:42:07.981806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.910 qpair failed and we were unable to recover it. 
00:32:23.910 [2024-11-26 07:42:07.991681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.910 [2024-11-26 07:42:07.991757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.910 [2024-11-26 07:42:07.991767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.910 [2024-11-26 07:42:07.991773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.910 [2024-11-26 07:42:07.991777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.910 [2024-11-26 07:42:07.991787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.910 qpair failed and we were unable to recover it. 
00:32:23.910 [2024-11-26 07:42:08.001752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.910 [2024-11-26 07:42:08.001799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.910 [2024-11-26 07:42:08.001811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.910 [2024-11-26 07:42:08.001816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.910 [2024-11-26 07:42:08.001821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.910 [2024-11-26 07:42:08.001832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.910 qpair failed and we were unable to recover it. 
00:32:23.910 [2024-11-26 07:42:08.011761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.910 [2024-11-26 07:42:08.011811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.910 [2024-11-26 07:42:08.011821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.910 [2024-11-26 07:42:08.011827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.910 [2024-11-26 07:42:08.011832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.910 [2024-11-26 07:42:08.011842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.910 qpair failed and we were unable to recover it. 
00:32:23.910 [2024-11-26 07:42:08.021784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.910 [2024-11-26 07:42:08.021860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.910 [2024-11-26 07:42:08.021874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.910 [2024-11-26 07:42:08.021879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.910 [2024-11-26 07:42:08.021884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.910 [2024-11-26 07:42:08.021895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.910 qpair failed and we were unable to recover it. 
00:32:23.910 [2024-11-26 07:42:08.031700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.910 [2024-11-26 07:42:08.031746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.910 [2024-11-26 07:42:08.031756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.910 [2024-11-26 07:42:08.031761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.910 [2024-11-26 07:42:08.031766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:23.910 [2024-11-26 07:42:08.031776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:23.910 qpair failed and we were unable to recover it. 
00:32:24.174 [2024-11-26 07:42:08.041840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.174 [2024-11-26 07:42:08.041895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.174 [2024-11-26 07:42:08.041908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.174 [2024-11-26 07:42:08.041913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.174 [2024-11-26 07:42:08.041918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.174 [2024-11-26 07:42:08.041928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.174 qpair failed and we were unable to recover it. 
00:32:24.174 [2024-11-26 07:42:08.051741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.174 [2024-11-26 07:42:08.051794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.174 [2024-11-26 07:42:08.051804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.174 [2024-11-26 07:42:08.051810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.174 [2024-11-26 07:42:08.051815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.174 [2024-11-26 07:42:08.051825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.174 qpair failed and we were unable to recover it. 
00:32:24.174 [2024-11-26 07:42:08.061898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.174 [2024-11-26 07:42:08.061944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.174 [2024-11-26 07:42:08.061955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.174 [2024-11-26 07:42:08.061960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.174 [2024-11-26 07:42:08.061965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.174 [2024-11-26 07:42:08.061976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.174 qpair failed and we were unable to recover it. 
00:32:24.174 [2024-11-26 07:42:08.071900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.174 [2024-11-26 07:42:08.071958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.174 [2024-11-26 07:42:08.071967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.174 [2024-11-26 07:42:08.071973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.174 [2024-11-26 07:42:08.071978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.174 [2024-11-26 07:42:08.071988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.174 qpair failed and we were unable to recover it. 
00:32:24.174 [2024-11-26 07:42:08.081953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.174 [2024-11-26 07:42:08.082002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.174 [2024-11-26 07:42:08.082012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.174 [2024-11-26 07:42:08.082020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.174 [2024-11-26 07:42:08.082025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.174 [2024-11-26 07:42:08.082036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.174 qpair failed and we were unable to recover it. 
00:32:24.174 [2024-11-26 07:42:08.091979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.174 [2024-11-26 07:42:08.092026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.174 [2024-11-26 07:42:08.092036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.174 [2024-11-26 07:42:08.092041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.174 [2024-11-26 07:42:08.092046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.174 [2024-11-26 07:42:08.092057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.174 qpair failed and we were unable to recover it. 
00:32:24.174 [2024-11-26 07:42:08.101929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.174 [2024-11-26 07:42:08.101977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.174 [2024-11-26 07:42:08.101987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.174 [2024-11-26 07:42:08.101992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.174 [2024-11-26 07:42:08.101997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.174 [2024-11-26 07:42:08.102007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.174 qpair failed and we were unable to recover it. 
00:32:24.174 [2024-11-26 07:42:08.112016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.174 [2024-11-26 07:42:08.112064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.174 [2024-11-26 07:42:08.112073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.174 [2024-11-26 07:42:08.112078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.175 [2024-11-26 07:42:08.112083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.175 [2024-11-26 07:42:08.112093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.175 qpair failed and we were unable to recover it. 
00:32:24.175 [2024-11-26 07:42:08.122055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.175 [2024-11-26 07:42:08.122104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.175 [2024-11-26 07:42:08.122113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.175 [2024-11-26 07:42:08.122119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.175 [2024-11-26 07:42:08.122124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.175 [2024-11-26 07:42:08.122135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.175 qpair failed and we were unable to recover it. 
00:32:24.175 [2024-11-26 07:42:08.131938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.175 [2024-11-26 07:42:08.131989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.175 [2024-11-26 07:42:08.131998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.175 [2024-11-26 07:42:08.132003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.175 [2024-11-26 07:42:08.132008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.175 [2024-11-26 07:42:08.132018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.175 qpair failed and we were unable to recover it. 
00:32:24.175 [2024-11-26 07:42:08.142056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.175 [2024-11-26 07:42:08.142096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.175 [2024-11-26 07:42:08.142106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.175 [2024-11-26 07:42:08.142111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.175 [2024-11-26 07:42:08.142116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.175 [2024-11-26 07:42:08.142126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.175 qpair failed and we were unable to recover it. 
00:32:24.175 [2024-11-26 07:42:08.152131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.175 [2024-11-26 07:42:08.152181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.175 [2024-11-26 07:42:08.152190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.175 [2024-11-26 07:42:08.152195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.175 [2024-11-26 07:42:08.152200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.175 [2024-11-26 07:42:08.152210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.175 qpair failed and we were unable to recover it. 
00:32:24.175 [2024-11-26 07:42:08.162095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.175 [2024-11-26 07:42:08.162155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.175 [2024-11-26 07:42:08.162165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.175 [2024-11-26 07:42:08.162170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.175 [2024-11-26 07:42:08.162175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.175 [2024-11-26 07:42:08.162186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.175 qpair failed and we were unable to recover it. 
00:32:24.175 [2024-11-26 07:42:08.172079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.175 [2024-11-26 07:42:08.172130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.175 [2024-11-26 07:42:08.172140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.175 [2024-11-26 07:42:08.172145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.175 [2024-11-26 07:42:08.172150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.175 [2024-11-26 07:42:08.172160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.175 qpair failed and we were unable to recover it. 
00:32:24.175 [2024-11-26 07:42:08.182077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.175 [2024-11-26 07:42:08.182120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.175 [2024-11-26 07:42:08.182129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.175 [2024-11-26 07:42:08.182135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.175 [2024-11-26 07:42:08.182139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.175 [2024-11-26 07:42:08.182149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.175 qpair failed and we were unable to recover it. 
00:32:24.175 [2024-11-26 07:42:08.192227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.175 [2024-11-26 07:42:08.192270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.175 [2024-11-26 07:42:08.192279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.175 [2024-11-26 07:42:08.192284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.175 [2024-11-26 07:42:08.192289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.175 [2024-11-26 07:42:08.192299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.175 qpair failed and we were unable to recover it. 
00:32:24.175 [2024-11-26 07:42:08.202079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.175 [2024-11-26 07:42:08.202123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.175 [2024-11-26 07:42:08.202133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.175 [2024-11-26 07:42:08.202139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.175 [2024-11-26 07:42:08.202144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.175 [2024-11-26 07:42:08.202155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.175 qpair failed and we were unable to recover it. 
00:32:24.175 [2024-11-26 07:42:08.212307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.175 [2024-11-26 07:42:08.212358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.175 [2024-11-26 07:42:08.212368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.175 [2024-11-26 07:42:08.212376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.175 [2024-11-26 07:42:08.212381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.175 [2024-11-26 07:42:08.212391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.175 qpair failed and we were unable to recover it. 
00:32:24.175 [2024-11-26 07:42:08.222221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.175 [2024-11-26 07:42:08.222257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.175 [2024-11-26 07:42:08.222267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.175 [2024-11-26 07:42:08.222272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.175 [2024-11-26 07:42:08.222277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.175 [2024-11-26 07:42:08.222288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.175 qpair failed and we were unable to recover it. 
00:32:24.175 [2024-11-26 07:42:08.232328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.175 [2024-11-26 07:42:08.232376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.175 [2024-11-26 07:42:08.232387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.175 [2024-11-26 07:42:08.232392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.176 [2024-11-26 07:42:08.232397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.176 [2024-11-26 07:42:08.232407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.176 qpair failed and we were unable to recover it. 
00:32:24.176 [2024-11-26 07:42:08.242331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.176 [2024-11-26 07:42:08.242374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.176 [2024-11-26 07:42:08.242384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.176 [2024-11-26 07:42:08.242389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.176 [2024-11-26 07:42:08.242394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.176 [2024-11-26 07:42:08.242405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.176 qpair failed and we were unable to recover it. 
00:32:24.176 [2024-11-26 07:42:08.252265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.176 [2024-11-26 07:42:08.252317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.176 [2024-11-26 07:42:08.252327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.176 [2024-11-26 07:42:08.252332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.176 [2024-11-26 07:42:08.252337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.176 [2024-11-26 07:42:08.252350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.176 qpair failed and we were unable to recover it. 
00:32:24.176 [2024-11-26 07:42:08.262344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.176 [2024-11-26 07:42:08.262383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.176 [2024-11-26 07:42:08.262393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.176 [2024-11-26 07:42:08.262398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.176 [2024-11-26 07:42:08.262403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.176 [2024-11-26 07:42:08.262413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.176 qpair failed and we were unable to recover it. 
00:32:24.176 [2024-11-26 07:42:08.272404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.176 [2024-11-26 07:42:08.272454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.176 [2024-11-26 07:42:08.272464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.176 [2024-11-26 07:42:08.272469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.176 [2024-11-26 07:42:08.272474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.176 [2024-11-26 07:42:08.272484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.176 qpair failed and we were unable to recover it. 
00:32:24.176 [2024-11-26 07:42:08.282291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.176 [2024-11-26 07:42:08.282334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.176 [2024-11-26 07:42:08.282344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.176 [2024-11-26 07:42:08.282349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.176 [2024-11-26 07:42:08.282353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.176 [2024-11-26 07:42:08.282364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.176 qpair failed and we were unable to recover it. 
00:32:24.176 [2024-11-26 07:42:08.292387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.176 [2024-11-26 07:42:08.292436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.176 [2024-11-26 07:42:08.292445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.176 [2024-11-26 07:42:08.292451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.176 [2024-11-26 07:42:08.292455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.176 [2024-11-26 07:42:08.292466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.176 qpair failed and we were unable to recover it. 
00:32:24.440 [2024-11-26 07:42:08.302471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.440 [2024-11-26 07:42:08.302512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.440 [2024-11-26 07:42:08.302522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.440 [2024-11-26 07:42:08.302527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.440 [2024-11-26 07:42:08.302532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.440 [2024-11-26 07:42:08.302542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.440 qpair failed and we were unable to recover it. 
00:32:24.440 [2024-11-26 07:42:08.312554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.440 [2024-11-26 07:42:08.312598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.440 [2024-11-26 07:42:08.312608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.440 [2024-11-26 07:42:08.312613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.440 [2024-11-26 07:42:08.312618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.440 [2024-11-26 07:42:08.312627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.440 qpair failed and we were unable to recover it. 
00:32:24.440 [2024-11-26 07:42:08.322451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.440 [2024-11-26 07:42:08.322491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.440 [2024-11-26 07:42:08.322500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.440 [2024-11-26 07:42:08.322505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.440 [2024-11-26 07:42:08.322510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.440 [2024-11-26 07:42:08.322519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.440 qpair failed and we were unable to recover it. 
00:32:24.440 [2024-11-26 07:42:08.332571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.440 [2024-11-26 07:42:08.332641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.440 [2024-11-26 07:42:08.332651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.440 [2024-11-26 07:42:08.332656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.440 [2024-11-26 07:42:08.332661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.440 [2024-11-26 07:42:08.332671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.440 qpair failed and we were unable to recover it. 
00:32:24.440 [2024-11-26 07:42:08.342563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.440 [2024-11-26 07:42:08.342601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.440 [2024-11-26 07:42:08.342615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.440 [2024-11-26 07:42:08.342621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.440 [2024-11-26 07:42:08.342626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.440 [2024-11-26 07:42:08.342636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.440 qpair failed and we were unable to recover it. 
00:32:24.440 [2024-11-26 07:42:08.352651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.440 [2024-11-26 07:42:08.352697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.440 [2024-11-26 07:42:08.352707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.440 [2024-11-26 07:42:08.352713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.440 [2024-11-26 07:42:08.352717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.440 [2024-11-26 07:42:08.352728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.440 qpair failed and we were unable to recover it. 
00:32:24.440 [2024-11-26 07:42:08.362648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.440 [2024-11-26 07:42:08.362705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.440 [2024-11-26 07:42:08.362715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.440 [2024-11-26 07:42:08.362721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.440 [2024-11-26 07:42:08.362725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.440 [2024-11-26 07:42:08.362735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.440 qpair failed and we were unable to recover it. 
00:32:24.440 [2024-11-26 07:42:08.372727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.440 [2024-11-26 07:42:08.372773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.440 [2024-11-26 07:42:08.372783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.441 [2024-11-26 07:42:08.372788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.441 [2024-11-26 07:42:08.372793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.441 [2024-11-26 07:42:08.372804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.441 qpair failed and we were unable to recover it. 
00:32:24.441 [2024-11-26 07:42:08.382693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.441 [2024-11-26 07:42:08.382733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.441 [2024-11-26 07:42:08.382744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.441 [2024-11-26 07:42:08.382749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.441 [2024-11-26 07:42:08.382753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.441 [2024-11-26 07:42:08.382767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.441 qpair failed and we were unable to recover it. 
00:32:24.441 [2024-11-26 07:42:08.392760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.441 [2024-11-26 07:42:08.392803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.441 [2024-11-26 07:42:08.392814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.441 [2024-11-26 07:42:08.392819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.441 [2024-11-26 07:42:08.392824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.441 [2024-11-26 07:42:08.392835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.441 qpair failed and we were unable to recover it. 
00:32:24.441 [2024-11-26 07:42:08.402757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.441 [2024-11-26 07:42:08.402798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.441 [2024-11-26 07:42:08.402808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.441 [2024-11-26 07:42:08.402814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.441 [2024-11-26 07:42:08.402818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.441 [2024-11-26 07:42:08.402829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.441 qpair failed and we were unable to recover it. 
00:32:24.441 [2024-11-26 07:42:08.412835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.441 [2024-11-26 07:42:08.412888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.441 [2024-11-26 07:42:08.412898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.441 [2024-11-26 07:42:08.412903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.441 [2024-11-26 07:42:08.412908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.441 [2024-11-26 07:42:08.412918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.441 qpair failed and we were unable to recover it. 
00:32:24.441 [2024-11-26 07:42:08.422818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.441 [2024-11-26 07:42:08.422860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.441 [2024-11-26 07:42:08.422873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.441 [2024-11-26 07:42:08.422879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.441 [2024-11-26 07:42:08.422883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.441 [2024-11-26 07:42:08.422894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.441 qpair failed and we were unable to recover it. 
00:32:24.441 [2024-11-26 07:42:08.432882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.441 [2024-11-26 07:42:08.432931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.441 [2024-11-26 07:42:08.432941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.441 [2024-11-26 07:42:08.432946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.441 [2024-11-26 07:42:08.432951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.441 [2024-11-26 07:42:08.432961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.441 qpair failed and we were unable to recover it. 
00:32:24.441 [2024-11-26 07:42:08.442907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.441 [2024-11-26 07:42:08.442975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.441 [2024-11-26 07:42:08.442984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.441 [2024-11-26 07:42:08.442990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.441 [2024-11-26 07:42:08.442994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.441 [2024-11-26 07:42:08.443005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.441 qpair failed and we were unable to recover it. 
00:32:24.441 [2024-11-26 07:42:08.452942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.441 [2024-11-26 07:42:08.452987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.441 [2024-11-26 07:42:08.452998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.441 [2024-11-26 07:42:08.453004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.441 [2024-11-26 07:42:08.453010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.441 [2024-11-26 07:42:08.453021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.441 qpair failed and we were unable to recover it. 
00:32:24.441 [2024-11-26 07:42:08.462877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.441 [2024-11-26 07:42:08.462917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.441 [2024-11-26 07:42:08.462928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.441 [2024-11-26 07:42:08.462933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.441 [2024-11-26 07:42:08.462938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.441 [2024-11-26 07:42:08.462948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.441 qpair failed and we were unable to recover it. 
00:32:24.441 [2024-11-26 07:42:08.472977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.441 [2024-11-26 07:42:08.473023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.441 [2024-11-26 07:42:08.473036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.441 [2024-11-26 07:42:08.473042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.441 [2024-11-26 07:42:08.473046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.441 [2024-11-26 07:42:08.473058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.441 qpair failed and we were unable to recover it. 
00:32:24.441 [2024-11-26 07:42:08.482971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.441 [2024-11-26 07:42:08.483013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.442 [2024-11-26 07:42:08.483023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.442 [2024-11-26 07:42:08.483029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.442 [2024-11-26 07:42:08.483034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.442 [2024-11-26 07:42:08.483044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.442 qpair failed and we were unable to recover it. 
00:32:24.442 [2024-11-26 07:42:08.492949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.442 [2024-11-26 07:42:08.493002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.442 [2024-11-26 07:42:08.493012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.442 [2024-11-26 07:42:08.493017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.442 [2024-11-26 07:42:08.493022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.442 [2024-11-26 07:42:08.493033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.442 qpair failed and we were unable to recover it. 
00:32:24.442 [2024-11-26 07:42:08.503031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.442 [2024-11-26 07:42:08.503077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.442 [2024-11-26 07:42:08.503087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.442 [2024-11-26 07:42:08.503093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.442 [2024-11-26 07:42:08.503097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.442 [2024-11-26 07:42:08.503107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.442 qpair failed and we were unable to recover it. 
00:32:24.442 [2024-11-26 07:42:08.513092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.442 [2024-11-26 07:42:08.513140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.442 [2024-11-26 07:42:08.513152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.442 [2024-11-26 07:42:08.513157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.442 [2024-11-26 07:42:08.513165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.442 [2024-11-26 07:42:08.513178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.442 qpair failed and we were unable to recover it. 
[The same six-line CONNECT failure sequence (ctrlr.c:762 "Unknown controller ID 0x1" → nvme_fabric.c:599/610 "Connect command failed, rc -5 ... sct 1, sc 130" → nvme_tcp.c:2348/2125 connect failure on tqpair=0x7f90f4000b90 → nvme_qpair.c:812 "CQ transport error -6 (No such device or address) on qpair id 2" → "qpair failed and we were unable to recover it.") repeats 34 more times at roughly 10 ms intervals, from 2024-11-26 07:42:08.522949 through 07:42:08.854086; only the timestamps differ.]
00:32:24.972 [2024-11-26 07:42:08.863970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.972 [2024-11-26 07:42:08.864009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.972 [2024-11-26 07:42:08.864020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.972 [2024-11-26 07:42:08.864025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.972 [2024-11-26 07:42:08.864030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.972 [2024-11-26 07:42:08.864041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.972 qpair failed and we were unable to recover it. 
00:32:24.972 [2024-11-26 07:42:08.874087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.972 [2024-11-26 07:42:08.874134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.972 [2024-11-26 07:42:08.874144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.972 [2024-11-26 07:42:08.874149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.972 [2024-11-26 07:42:08.874155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.972 [2024-11-26 07:42:08.874165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.972 qpair failed and we were unable to recover it. 
00:32:24.972 [2024-11-26 07:42:08.884061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.972 [2024-11-26 07:42:08.884103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.972 [2024-11-26 07:42:08.884112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.972 [2024-11-26 07:42:08.884118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.972 [2024-11-26 07:42:08.884123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.972 [2024-11-26 07:42:08.884133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.972 qpair failed and we were unable to recover it. 
00:32:24.972 [2024-11-26 07:42:08.894106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.972 [2024-11-26 07:42:08.894157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.972 [2024-11-26 07:42:08.894167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.972 [2024-11-26 07:42:08.894173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.972 [2024-11-26 07:42:08.894178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.972 [2024-11-26 07:42:08.894188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.972 qpair failed and we were unable to recover it. 
00:32:24.972 [2024-11-26 07:42:08.904115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.972 [2024-11-26 07:42:08.904155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.972 [2024-11-26 07:42:08.904167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.972 [2024-11-26 07:42:08.904173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.972 [2024-11-26 07:42:08.904178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.972 [2024-11-26 07:42:08.904189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.972 qpair failed and we were unable to recover it. 
00:32:24.972 [2024-11-26 07:42:08.914043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.972 [2024-11-26 07:42:08.914086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.972 [2024-11-26 07:42:08.914097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.972 [2024-11-26 07:42:08.914102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.972 [2024-11-26 07:42:08.914107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.972 [2024-11-26 07:42:08.914118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.972 qpair failed and we were unable to recover it. 
00:32:24.972 [2024-11-26 07:42:08.924180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.972 [2024-11-26 07:42:08.924222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.972 [2024-11-26 07:42:08.924232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.972 [2024-11-26 07:42:08.924237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.972 [2024-11-26 07:42:08.924242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.972 [2024-11-26 07:42:08.924252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.972 qpair failed and we were unable to recover it. 
00:32:24.972 [2024-11-26 07:42:08.934249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.972 [2024-11-26 07:42:08.934321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.972 [2024-11-26 07:42:08.934330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.972 [2024-11-26 07:42:08.934336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.972 [2024-11-26 07:42:08.934341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.972 [2024-11-26 07:42:08.934351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.972 qpair failed and we were unable to recover it. 
00:32:24.972 [2024-11-26 07:42:08.944209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.972 [2024-11-26 07:42:08.944251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.972 [2024-11-26 07:42:08.944260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.972 [2024-11-26 07:42:08.944265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.972 [2024-11-26 07:42:08.944273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.973 [2024-11-26 07:42:08.944283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.973 qpair failed and we were unable to recover it. 
00:32:24.973 [2024-11-26 07:42:08.954160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.973 [2024-11-26 07:42:08.954210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.973 [2024-11-26 07:42:08.954221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.973 [2024-11-26 07:42:08.954227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.973 [2024-11-26 07:42:08.954232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.973 [2024-11-26 07:42:08.954243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.973 qpair failed and we were unable to recover it. 
00:32:24.973 [2024-11-26 07:42:08.964138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.973 [2024-11-26 07:42:08.964183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.973 [2024-11-26 07:42:08.964193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.973 [2024-11-26 07:42:08.964198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.973 [2024-11-26 07:42:08.964203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.973 [2024-11-26 07:42:08.964213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.973 qpair failed and we were unable to recover it. 
00:32:24.973 [2024-11-26 07:42:08.974356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.973 [2024-11-26 07:42:08.974399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.973 [2024-11-26 07:42:08.974409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.973 [2024-11-26 07:42:08.974415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.973 [2024-11-26 07:42:08.974420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.973 [2024-11-26 07:42:08.974430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.973 qpair failed and we were unable to recover it. 
00:32:24.973 [2024-11-26 07:42:08.984340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.973 [2024-11-26 07:42:08.984379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.973 [2024-11-26 07:42:08.984389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.973 [2024-11-26 07:42:08.984394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.973 [2024-11-26 07:42:08.984399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.973 [2024-11-26 07:42:08.984409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.973 qpair failed and we were unable to recover it. 
00:32:24.973 [2024-11-26 07:42:08.994404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.973 [2024-11-26 07:42:08.994453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.973 [2024-11-26 07:42:08.994464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.973 [2024-11-26 07:42:08.994469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.973 [2024-11-26 07:42:08.994474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.973 [2024-11-26 07:42:08.994485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.973 qpair failed and we were unable to recover it. 
00:32:24.973 [2024-11-26 07:42:09.004393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.973 [2024-11-26 07:42:09.004437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.973 [2024-11-26 07:42:09.004448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.973 [2024-11-26 07:42:09.004453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.973 [2024-11-26 07:42:09.004458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.973 [2024-11-26 07:42:09.004468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.973 qpair failed and we were unable to recover it. 
00:32:24.973 [2024-11-26 07:42:09.014430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.973 [2024-11-26 07:42:09.014476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.973 [2024-11-26 07:42:09.014486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.973 [2024-11-26 07:42:09.014492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.973 [2024-11-26 07:42:09.014497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.973 [2024-11-26 07:42:09.014508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.973 qpair failed and we were unable to recover it. 
00:32:24.973 [2024-11-26 07:42:09.024430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.973 [2024-11-26 07:42:09.024468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.973 [2024-11-26 07:42:09.024478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.973 [2024-11-26 07:42:09.024483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.973 [2024-11-26 07:42:09.024488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.973 [2024-11-26 07:42:09.024498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.973 qpair failed and we were unable to recover it. 
00:32:24.973 [2024-11-26 07:42:09.034470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.973 [2024-11-26 07:42:09.034516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.973 [2024-11-26 07:42:09.034529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.973 [2024-11-26 07:42:09.034535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.973 [2024-11-26 07:42:09.034540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.973 [2024-11-26 07:42:09.034550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.973 qpair failed and we were unable to recover it. 
00:32:24.973 [2024-11-26 07:42:09.044495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.973 [2024-11-26 07:42:09.044543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.973 [2024-11-26 07:42:09.044553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.973 [2024-11-26 07:42:09.044558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.973 [2024-11-26 07:42:09.044563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.973 [2024-11-26 07:42:09.044574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.973 qpair failed and we were unable to recover it. 
00:32:24.973 [2024-11-26 07:42:09.054541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.973 [2024-11-26 07:42:09.054584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.973 [2024-11-26 07:42:09.054594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.973 [2024-11-26 07:42:09.054600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.973 [2024-11-26 07:42:09.054604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.973 [2024-11-26 07:42:09.054614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.973 qpair failed and we were unable to recover it. 
00:32:24.973 [2024-11-26 07:42:09.064542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.973 [2024-11-26 07:42:09.064581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.973 [2024-11-26 07:42:09.064591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.973 [2024-11-26 07:42:09.064596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.973 [2024-11-26 07:42:09.064601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.973 [2024-11-26 07:42:09.064611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.973 qpair failed and we were unable to recover it. 
00:32:24.973 [2024-11-26 07:42:09.074635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.973 [2024-11-26 07:42:09.074683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.973 [2024-11-26 07:42:09.074693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.973 [2024-11-26 07:42:09.074698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.973 [2024-11-26 07:42:09.074706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.974 [2024-11-26 07:42:09.074716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.974 qpair failed and we were unable to recover it. 
00:32:24.974 [2024-11-26 07:42:09.084610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.974 [2024-11-26 07:42:09.084654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.974 [2024-11-26 07:42:09.084664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.974 [2024-11-26 07:42:09.084669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.974 [2024-11-26 07:42:09.084674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.974 [2024-11-26 07:42:09.084684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.974 qpair failed and we were unable to recover it. 
00:32:24.974 [2024-11-26 07:42:09.094548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.974 [2024-11-26 07:42:09.094592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.974 [2024-11-26 07:42:09.094602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.974 [2024-11-26 07:42:09.094607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.974 [2024-11-26 07:42:09.094612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:24.974 [2024-11-26 07:42:09.094622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.974 qpair failed and we were unable to recover it. 
00:32:25.237 [2024-11-26 07:42:09.104611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.237 [2024-11-26 07:42:09.104656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.237 [2024-11-26 07:42:09.104675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.237 [2024-11-26 07:42:09.104681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.237 [2024-11-26 07:42:09.104686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.237 [2024-11-26 07:42:09.104701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.237 qpair failed and we were unable to recover it. 
00:32:25.237 [2024-11-26 07:42:09.114716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.237 [2024-11-26 07:42:09.114766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.237 [2024-11-26 07:42:09.114784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.237 [2024-11-26 07:42:09.114791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.237 [2024-11-26 07:42:09.114796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.237 [2024-11-26 07:42:09.114810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.237 qpair failed and we were unable to recover it. 
00:32:25.237 [2024-11-26 07:42:09.124716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.237 [2024-11-26 07:42:09.124782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.237 [2024-11-26 07:42:09.124794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.237 [2024-11-26 07:42:09.124799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.237 [2024-11-26 07:42:09.124804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.237 [2024-11-26 07:42:09.124816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.237 qpair failed and we were unable to recover it. 
00:32:25.237 [2024-11-26 07:42:09.134758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.237 [2024-11-26 07:42:09.134798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.237 [2024-11-26 07:42:09.134808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.237 [2024-11-26 07:42:09.134814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.237 [2024-11-26 07:42:09.134819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.237 [2024-11-26 07:42:09.134830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.237 qpair failed and we were unable to recover it. 
00:32:25.503 [2024-11-26 07:42:09.485554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.503 [2024-11-26 07:42:09.485596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.503 [2024-11-26 07:42:09.485606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.503 [2024-11-26 07:42:09.485611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.503 [2024-11-26 07:42:09.485616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.503 [2024-11-26 07:42:09.485626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.503 qpair failed and we were unable to recover it. 
00:32:25.503 [2024-11-26 07:42:09.495723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.503 [2024-11-26 07:42:09.495764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.503 [2024-11-26 07:42:09.495774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.503 [2024-11-26 07:42:09.495780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.503 [2024-11-26 07:42:09.495784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.503 [2024-11-26 07:42:09.495794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.503 qpair failed and we were unable to recover it. 
00:32:25.503 [2024-11-26 07:42:09.505731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.503 [2024-11-26 07:42:09.505769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.503 [2024-11-26 07:42:09.505779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.503 [2024-11-26 07:42:09.505784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.503 [2024-11-26 07:42:09.505792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.503 [2024-11-26 07:42:09.505802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.503 qpair failed and we were unable to recover it. 
00:32:25.503 [2024-11-26 07:42:09.515785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.503 [2024-11-26 07:42:09.515840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.503 [2024-11-26 07:42:09.515849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.503 [2024-11-26 07:42:09.515855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.503 [2024-11-26 07:42:09.515860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.503 [2024-11-26 07:42:09.515875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.503 qpair failed and we were unable to recover it. 
00:32:25.503 [2024-11-26 07:42:09.525783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.503 [2024-11-26 07:42:09.525824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.503 [2024-11-26 07:42:09.525834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.503 [2024-11-26 07:42:09.525839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.503 [2024-11-26 07:42:09.525844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.503 [2024-11-26 07:42:09.525854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.503 qpair failed and we were unable to recover it. 
00:32:25.503 [2024-11-26 07:42:09.535796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.503 [2024-11-26 07:42:09.535836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.503 [2024-11-26 07:42:09.535847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.503 [2024-11-26 07:42:09.535853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.503 [2024-11-26 07:42:09.535857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.503 [2024-11-26 07:42:09.535873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.503 qpair failed and we were unable to recover it. 
00:32:25.503 [2024-11-26 07:42:09.545731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.504 [2024-11-26 07:42:09.545771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.504 [2024-11-26 07:42:09.545781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.504 [2024-11-26 07:42:09.545786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.504 [2024-11-26 07:42:09.545791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.504 [2024-11-26 07:42:09.545802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.504 qpair failed and we were unable to recover it. 
00:32:25.504 [2024-11-26 07:42:09.555900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.504 [2024-11-26 07:42:09.555947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.504 [2024-11-26 07:42:09.555959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.504 [2024-11-26 07:42:09.555965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.504 [2024-11-26 07:42:09.555970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.504 [2024-11-26 07:42:09.555981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.504 qpair failed and we were unable to recover it. 
00:32:25.504 [2024-11-26 07:42:09.565770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.504 [2024-11-26 07:42:09.565813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.504 [2024-11-26 07:42:09.565823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.504 [2024-11-26 07:42:09.565828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.504 [2024-11-26 07:42:09.565833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.504 [2024-11-26 07:42:09.565844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.504 qpair failed and we were unable to recover it. 
00:32:25.504 [2024-11-26 07:42:09.575937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.504 [2024-11-26 07:42:09.575984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.504 [2024-11-26 07:42:09.575994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.504 [2024-11-26 07:42:09.575999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.504 [2024-11-26 07:42:09.576003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.504 [2024-11-26 07:42:09.576014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.504 qpair failed and we were unable to recover it. 
00:32:25.504 [2024-11-26 07:42:09.585909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.504 [2024-11-26 07:42:09.585948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.504 [2024-11-26 07:42:09.585958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.504 [2024-11-26 07:42:09.585963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.504 [2024-11-26 07:42:09.585968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.504 [2024-11-26 07:42:09.585979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.504 qpair failed and we were unable to recover it. 
00:32:25.504 [2024-11-26 07:42:09.596003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.504 [2024-11-26 07:42:09.596044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.504 [2024-11-26 07:42:09.596056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.504 [2024-11-26 07:42:09.596062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.504 [2024-11-26 07:42:09.596067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.504 [2024-11-26 07:42:09.596077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.504 qpair failed and we were unable to recover it. 
00:32:25.504 [2024-11-26 07:42:09.606001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.504 [2024-11-26 07:42:09.606046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.504 [2024-11-26 07:42:09.606056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.504 [2024-11-26 07:42:09.606061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.504 [2024-11-26 07:42:09.606067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.504 [2024-11-26 07:42:09.606077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.504 qpair failed and we were unable to recover it. 
00:32:25.504 [2024-11-26 07:42:09.616001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.504 [2024-11-26 07:42:09.616045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.504 [2024-11-26 07:42:09.616055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.504 [2024-11-26 07:42:09.616061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.504 [2024-11-26 07:42:09.616066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.504 [2024-11-26 07:42:09.616076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.504 qpair failed and we were unable to recover it. 
00:32:25.504 [2024-11-26 07:42:09.626054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.504 [2024-11-26 07:42:09.626128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.504 [2024-11-26 07:42:09.626138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.504 [2024-11-26 07:42:09.626143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.504 [2024-11-26 07:42:09.626148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.504 [2024-11-26 07:42:09.626158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.504 qpair failed and we were unable to recover it. 
00:32:25.768 [2024-11-26 07:42:09.636108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.768 [2024-11-26 07:42:09.636181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.768 [2024-11-26 07:42:09.636191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.768 [2024-11-26 07:42:09.636197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.768 [2024-11-26 07:42:09.636204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.768 [2024-11-26 07:42:09.636215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.768 qpair failed and we were unable to recover it. 
00:32:25.768 [2024-11-26 07:42:09.646101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.768 [2024-11-26 07:42:09.646147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.768 [2024-11-26 07:42:09.646157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.768 [2024-11-26 07:42:09.646162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.768 [2024-11-26 07:42:09.646167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.768 [2024-11-26 07:42:09.646177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.768 qpair failed and we were unable to recover it. 
00:32:25.768 [2024-11-26 07:42:09.656166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.768 [2024-11-26 07:42:09.656245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.768 [2024-11-26 07:42:09.656255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.768 [2024-11-26 07:42:09.656260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.768 [2024-11-26 07:42:09.656265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.768 [2024-11-26 07:42:09.656276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.768 qpair failed and we were unable to recover it. 
00:32:25.768 [2024-11-26 07:42:09.666161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.769 [2024-11-26 07:42:09.666201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.769 [2024-11-26 07:42:09.666211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.769 [2024-11-26 07:42:09.666216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.769 [2024-11-26 07:42:09.666221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.769 [2024-11-26 07:42:09.666232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.769 qpair failed and we were unable to recover it. 
00:32:25.769 [2024-11-26 07:42:09.676230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.769 [2024-11-26 07:42:09.676291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.769 [2024-11-26 07:42:09.676302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.769 [2024-11-26 07:42:09.676307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.769 [2024-11-26 07:42:09.676312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.769 [2024-11-26 07:42:09.676322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.769 qpair failed and we were unable to recover it. 
00:32:25.769 [2024-11-26 07:42:09.686198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.769 [2024-11-26 07:42:09.686240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.769 [2024-11-26 07:42:09.686249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.769 [2024-11-26 07:42:09.686255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.769 [2024-11-26 07:42:09.686260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.769 [2024-11-26 07:42:09.686269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.769 qpair failed and we were unable to recover it. 
00:32:25.769 [2024-11-26 07:42:09.696244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.769 [2024-11-26 07:42:09.696291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.769 [2024-11-26 07:42:09.696301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.769 [2024-11-26 07:42:09.696307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.769 [2024-11-26 07:42:09.696312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.769 [2024-11-26 07:42:09.696322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.769 qpair failed and we were unable to recover it. 
00:32:25.769 [2024-11-26 07:42:09.706135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.769 [2024-11-26 07:42:09.706180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.769 [2024-11-26 07:42:09.706190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.769 [2024-11-26 07:42:09.706197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.769 [2024-11-26 07:42:09.706203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.769 [2024-11-26 07:42:09.706214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.769 qpair failed and we were unable to recover it. 
00:32:25.769 [2024-11-26 07:42:09.716287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.769 [2024-11-26 07:42:09.716357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.769 [2024-11-26 07:42:09.716366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.769 [2024-11-26 07:42:09.716372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.769 [2024-11-26 07:42:09.716376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.769 [2024-11-26 07:42:09.716386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.769 qpair failed and we were unable to recover it. 
00:32:25.769 [2024-11-26 07:42:09.726316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.769 [2024-11-26 07:42:09.726364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.769 [2024-11-26 07:42:09.726377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.769 [2024-11-26 07:42:09.726382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.769 [2024-11-26 07:42:09.726388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.769 [2024-11-26 07:42:09.726398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.769 qpair failed and we were unable to recover it. 
00:32:25.769 [2024-11-26 07:42:09.736360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.769 [2024-11-26 07:42:09.736405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.769 [2024-11-26 07:42:09.736415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.769 [2024-11-26 07:42:09.736420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.769 [2024-11-26 07:42:09.736425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.769 [2024-11-26 07:42:09.736435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.769 qpair failed and we were unable to recover it. 
00:32:25.769 [2024-11-26 07:42:09.746410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.769 [2024-11-26 07:42:09.746456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.769 [2024-11-26 07:42:09.746465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.769 [2024-11-26 07:42:09.746471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.769 [2024-11-26 07:42:09.746475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:25.769 [2024-11-26 07:42:09.746485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:25.769 qpair failed and we were unable to recover it. 
00:32:25.769 [2024-11-26 07:42:09.756428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.769 [2024-11-26 07:42:09.756467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.769 [2024-11-26 07:42:09.756477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.769 [2024-11-26 07:42:09.756482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.769 [2024-11-26 07:42:09.756487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:25.769 [2024-11-26 07:42:09.756497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.769 qpair failed and we were unable to recover it.
00:32:25.769 [2024-11-26 07:42:09.766443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.769 [2024-11-26 07:42:09.766485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.769 [2024-11-26 07:42:09.766495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.769 [2024-11-26 07:42:09.766504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.769 [2024-11-26 07:42:09.766508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:25.769 [2024-11-26 07:42:09.766518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.769 qpair failed and we were unable to recover it.
00:32:25.769 [2024-11-26 07:42:09.776315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.769 [2024-11-26 07:42:09.776361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.769 [2024-11-26 07:42:09.776371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.769 [2024-11-26 07:42:09.776377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.769 [2024-11-26 07:42:09.776381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:25.769 [2024-11-26 07:42:09.776391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.769 qpair failed and we were unable to recover it.
00:32:25.769 [2024-11-26 07:42:09.786465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.769 [2024-11-26 07:42:09.786512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.769 [2024-11-26 07:42:09.786522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.769 [2024-11-26 07:42:09.786527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.769 [2024-11-26 07:42:09.786532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:25.769 [2024-11-26 07:42:09.786542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.769 qpair failed and we were unable to recover it.
00:32:25.769 [2024-11-26 07:42:09.796500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.770 [2024-11-26 07:42:09.796541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.770 [2024-11-26 07:42:09.796551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.770 [2024-11-26 07:42:09.796556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.770 [2024-11-26 07:42:09.796561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:25.770 [2024-11-26 07:42:09.796571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.770 qpair failed and we were unable to recover it.
00:32:25.770 [2024-11-26 07:42:09.806520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.770 [2024-11-26 07:42:09.806563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.770 [2024-11-26 07:42:09.806573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.770 [2024-11-26 07:42:09.806578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.770 [2024-11-26 07:42:09.806583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:25.770 [2024-11-26 07:42:09.806593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.770 qpair failed and we were unable to recover it.
00:32:25.770 [2024-11-26 07:42:09.816532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.770 [2024-11-26 07:42:09.816574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.770 [2024-11-26 07:42:09.816585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.770 [2024-11-26 07:42:09.816592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.770 [2024-11-26 07:42:09.816598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:25.770 [2024-11-26 07:42:09.816609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.770 qpair failed and we were unable to recover it.
00:32:25.770 [2024-11-26 07:42:09.826542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.770 [2024-11-26 07:42:09.826584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.770 [2024-11-26 07:42:09.826594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.770 [2024-11-26 07:42:09.826600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.770 [2024-11-26 07:42:09.826605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:25.770 [2024-11-26 07:42:09.826615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.770 qpair failed and we were unable to recover it.
00:32:25.770 [2024-11-26 07:42:09.836636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.770 [2024-11-26 07:42:09.836679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.770 [2024-11-26 07:42:09.836689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.770 [2024-11-26 07:42:09.836694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.770 [2024-11-26 07:42:09.836699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:25.770 [2024-11-26 07:42:09.836709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.770 qpair failed and we were unable to recover it.
00:32:25.770 [2024-11-26 07:42:09.846607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.770 [2024-11-26 07:42:09.846648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.770 [2024-11-26 07:42:09.846659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.770 [2024-11-26 07:42:09.846665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.770 [2024-11-26 07:42:09.846669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:25.770 [2024-11-26 07:42:09.846680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.770 qpair failed and we were unable to recover it.
00:32:25.770 [2024-11-26 07:42:09.856557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.770 [2024-11-26 07:42:09.856606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.770 [2024-11-26 07:42:09.856617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.770 [2024-11-26 07:42:09.856623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.770 [2024-11-26 07:42:09.856628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:25.770 [2024-11-26 07:42:09.856639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.770 qpair failed and we were unable to recover it.
00:32:25.770 [2024-11-26 07:42:09.866667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.770 [2024-11-26 07:42:09.866709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.770 [2024-11-26 07:42:09.866719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.770 [2024-11-26 07:42:09.866724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.770 [2024-11-26 07:42:09.866729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:25.770 [2024-11-26 07:42:09.866740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.770 qpair failed and we were unable to recover it.
00:32:25.770 [2024-11-26 07:42:09.876717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.770 [2024-11-26 07:42:09.876760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.770 [2024-11-26 07:42:09.876770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.770 [2024-11-26 07:42:09.876775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.770 [2024-11-26 07:42:09.876780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:25.770 [2024-11-26 07:42:09.876790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.770 qpair failed and we were unable to recover it.
00:32:25.770 [2024-11-26 07:42:09.886719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.770 [2024-11-26 07:42:09.886760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.770 [2024-11-26 07:42:09.886770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.770 [2024-11-26 07:42:09.886775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.770 [2024-11-26 07:42:09.886780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:25.770 [2024-11-26 07:42:09.886791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.770 qpair failed and we were unable to recover it.
00:32:26.035 [2024-11-26 07:42:09.896778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.035 [2024-11-26 07:42:09.896823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.035 [2024-11-26 07:42:09.896833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.035 [2024-11-26 07:42:09.896841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.035 [2024-11-26 07:42:09.896846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.035 [2024-11-26 07:42:09.896857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.035 qpair failed and we were unable to recover it.
00:32:26.035 [2024-11-26 07:42:09.906787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.035 [2024-11-26 07:42:09.906823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.035 [2024-11-26 07:42:09.906834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.035 [2024-11-26 07:42:09.906840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.035 [2024-11-26 07:42:09.906844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.035 [2024-11-26 07:42:09.906855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.035 qpair failed and we were unable to recover it.
00:32:26.035 [2024-11-26 07:42:09.916838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.035 [2024-11-26 07:42:09.916887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.035 [2024-11-26 07:42:09.916898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.035 [2024-11-26 07:42:09.916904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.035 [2024-11-26 07:42:09.916909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.035 [2024-11-26 07:42:09.916921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.035 qpair failed and we were unable to recover it.
00:32:26.035 [2024-11-26 07:42:09.926822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.035 [2024-11-26 07:42:09.926875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.035 [2024-11-26 07:42:09.926887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.035 [2024-11-26 07:42:09.926892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.035 [2024-11-26 07:42:09.926897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.035 [2024-11-26 07:42:09.926908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.035 qpair failed and we were unable to recover it.
00:32:26.035 [2024-11-26 07:42:09.936896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.035 [2024-11-26 07:42:09.936940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.035 [2024-11-26 07:42:09.936950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.035 [2024-11-26 07:42:09.936955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.035 [2024-11-26 07:42:09.936961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.035 [2024-11-26 07:42:09.936974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.035 qpair failed and we were unable to recover it.
00:32:26.035 [2024-11-26 07:42:09.946966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.035 [2024-11-26 07:42:09.947006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.035 [2024-11-26 07:42:09.947016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.035 [2024-11-26 07:42:09.947021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.035 [2024-11-26 07:42:09.947026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.035 [2024-11-26 07:42:09.947037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.035 qpair failed and we were unable to recover it.
00:32:26.035 [2024-11-26 07:42:09.956974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.035 [2024-11-26 07:42:09.957019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.035 [2024-11-26 07:42:09.957029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.035 [2024-11-26 07:42:09.957034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.035 [2024-11-26 07:42:09.957038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.035 [2024-11-26 07:42:09.957049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.035 qpair failed and we were unable to recover it.
00:32:26.035 [2024-11-26 07:42:09.966966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.035 [2024-11-26 07:42:09.967050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.035 [2024-11-26 07:42:09.967060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.035 [2024-11-26 07:42:09.967065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.035 [2024-11-26 07:42:09.967071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.035 [2024-11-26 07:42:09.967081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.035 qpair failed and we were unable to recover it.
00:32:26.035 [2024-11-26 07:42:09.976966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.036 [2024-11-26 07:42:09.977025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.036 [2024-11-26 07:42:09.977034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.036 [2024-11-26 07:42:09.977040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.036 [2024-11-26 07:42:09.977045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.036 [2024-11-26 07:42:09.977055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.036 qpair failed and we were unable to recover it.
00:32:26.036 [2024-11-26 07:42:09.987009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.036 [2024-11-26 07:42:09.987048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.036 [2024-11-26 07:42:09.987057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.036 [2024-11-26 07:42:09.987062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.036 [2024-11-26 07:42:09.987067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.036 [2024-11-26 07:42:09.987078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.036 qpair failed and we were unable to recover it.
00:32:26.036 [2024-11-26 07:42:09.997078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.036 [2024-11-26 07:42:09.997120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.036 [2024-11-26 07:42:09.997129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.036 [2024-11-26 07:42:09.997135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.036 [2024-11-26 07:42:09.997139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.036 [2024-11-26 07:42:09.997150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.036 qpair failed and we were unable to recover it.
00:32:26.036 [2024-11-26 07:42:10.006958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.036 [2024-11-26 07:42:10.007001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.036 [2024-11-26 07:42:10.007012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.036 [2024-11-26 07:42:10.007017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.036 [2024-11-26 07:42:10.007022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.036 [2024-11-26 07:42:10.007032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.036 qpair failed and we were unable to recover it.
00:32:26.036 [2024-11-26 07:42:10.017094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.036 [2024-11-26 07:42:10.017183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.036 [2024-11-26 07:42:10.017193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.036 [2024-11-26 07:42:10.017199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.036 [2024-11-26 07:42:10.017204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.036 [2024-11-26 07:42:10.017214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.036 qpair failed and we were unable to recover it.
00:32:26.036 [2024-11-26 07:42:10.027000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.036 [2024-11-26 07:42:10.027087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.036 [2024-11-26 07:42:10.027100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.036 [2024-11-26 07:42:10.027106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.036 [2024-11-26 07:42:10.027111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.036 [2024-11-26 07:42:10.027122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.036 qpair failed and we were unable to recover it.
00:32:26.036 [2024-11-26 07:42:10.037171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.036 [2024-11-26 07:42:10.037217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.036 [2024-11-26 07:42:10.037227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.036 [2024-11-26 07:42:10.037233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.036 [2024-11-26 07:42:10.037237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.036 [2024-11-26 07:42:10.037248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.036 qpair failed and we were unable to recover it.
00:32:26.036 [2024-11-26 07:42:10.047139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.036 [2024-11-26 07:42:10.047184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.036 [2024-11-26 07:42:10.047195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.036 [2024-11-26 07:42:10.047201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.036 [2024-11-26 07:42:10.047206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.036 [2024-11-26 07:42:10.047217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.036 qpair failed and we were unable to recover it.
00:32:26.036 [2024-11-26 07:42:10.057190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.036 [2024-11-26 07:42:10.057234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.036 [2024-11-26 07:42:10.057246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.036 [2024-11-26 07:42:10.057252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.036 [2024-11-26 07:42:10.057257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.036 [2024-11-26 07:42:10.057268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.036 qpair failed and we were unable to recover it.
00:32:26.036 [2024-11-26 07:42:10.067111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.036 [2024-11-26 07:42:10.067155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.036 [2024-11-26 07:42:10.067165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.036 [2024-11-26 07:42:10.067170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.036 [2024-11-26 07:42:10.067178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.036 [2024-11-26 07:42:10.067188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.037 qpair failed and we were unable to recover it.
00:32:26.037 [2024-11-26 07:42:10.077265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.037 [2024-11-26 07:42:10.077308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.037 [2024-11-26 07:42:10.077318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.037 [2024-11-26 07:42:10.077323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.037 [2024-11-26 07:42:10.077328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.037 [2024-11-26 07:42:10.077339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.037 qpair failed and we were unable to recover it.
00:32:26.037 [2024-11-26 07:42:10.087252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.037 [2024-11-26 07:42:10.087324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.037 [2024-11-26 07:42:10.087334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.037 [2024-11-26 07:42:10.087339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.037 [2024-11-26 07:42:10.087345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.037 [2024-11-26 07:42:10.087355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.037 qpair failed and we were unable to recover it.
00:32:26.037 [2024-11-26 07:42:10.097298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.037 [2024-11-26 07:42:10.097339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.037 [2024-11-26 07:42:10.097349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.037 [2024-11-26 07:42:10.097354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.037 [2024-11-26 07:42:10.097359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.037 [2024-11-26 07:42:10.097369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.037 qpair failed and we were unable to recover it.
00:32:26.037 [2024-11-26 07:42:10.107277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:26.037 [2024-11-26 07:42:10.107318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:26.037 [2024-11-26 07:42:10.107328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:26.037 [2024-11-26 07:42:10.107333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:26.037 [2024-11-26 07:42:10.107338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:26.037 [2024-11-26 07:42:10.107348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:26.037 qpair failed and we were unable to recover it. 
00:32:26.037 [2024-11-26 07:42:10.117335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:26.037 [2024-11-26 07:42:10.117375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:26.037 [2024-11-26 07:42:10.117385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:26.037 [2024-11-26 07:42:10.117390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:26.037 [2024-11-26 07:42:10.117395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:26.037 [2024-11-26 07:42:10.117405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:26.037 qpair failed and we were unable to recover it. 
00:32:26.037 [2024-11-26 07:42:10.127220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:26.037 [2024-11-26 07:42:10.127262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:26.037 [2024-11-26 07:42:10.127271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:26.037 [2024-11-26 07:42:10.127277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:26.037 [2024-11-26 07:42:10.127282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:26.037 [2024-11-26 07:42:10.127292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:26.037 qpair failed and we were unable to recover it. 
00:32:26.037 [2024-11-26 07:42:10.137399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:26.037 [2024-11-26 07:42:10.137445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:26.037 [2024-11-26 07:42:10.137455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:26.037 [2024-11-26 07:42:10.137461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:26.037 [2024-11-26 07:42:10.137466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:26.037 [2024-11-26 07:42:10.137476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:26.037 qpair failed and we were unable to recover it. 
00:32:26.037 [2024-11-26 07:42:10.147414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:26.037 [2024-11-26 07:42:10.147503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:26.037 [2024-11-26 07:42:10.147513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:26.037 [2024-11-26 07:42:10.147519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:26.037 [2024-11-26 07:42:10.147524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:26.037 [2024-11-26 07:42:10.147534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:26.037 qpair failed and we were unable to recover it. 
00:32:26.037 [2024-11-26 07:42:10.157402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:26.037 [2024-11-26 07:42:10.157478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:26.037 [2024-11-26 07:42:10.157491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:26.037 [2024-11-26 07:42:10.157496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:26.037 [2024-11-26 07:42:10.157501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:26.037 [2024-11-26 07:42:10.157512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:26.037 qpair failed and we were unable to recover it. 
00:32:26.301 [2024-11-26 07:42:10.167473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.301 [2024-11-26 07:42:10.167513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.301 [2024-11-26 07:42:10.167524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.301 [2024-11-26 07:42:10.167529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.301 [2024-11-26 07:42:10.167533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.301 [2024-11-26 07:42:10.167544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.301 qpair failed and we were unable to recover it.
00:32:26.301 [2024-11-26 07:42:10.177503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.301 [2024-11-26 07:42:10.177546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.301 [2024-11-26 07:42:10.177556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.301 [2024-11-26 07:42:10.177561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.301 [2024-11-26 07:42:10.177566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.301 [2024-11-26 07:42:10.177576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.301 qpair failed and we were unable to recover it.
00:32:26.301 [2024-11-26 07:42:10.187538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.301 [2024-11-26 07:42:10.187578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.301 [2024-11-26 07:42:10.187588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.301 [2024-11-26 07:42:10.187593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.302 [2024-11-26 07:42:10.187598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.302 [2024-11-26 07:42:10.187608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.302 qpair failed and we were unable to recover it.
00:32:26.302 [2024-11-26 07:42:10.197407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.302 [2024-11-26 07:42:10.197447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.302 [2024-11-26 07:42:10.197457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.302 [2024-11-26 07:42:10.197462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.302 [2024-11-26 07:42:10.197470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.302 [2024-11-26 07:42:10.197480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.302 qpair failed and we were unable to recover it.
00:32:26.302 [2024-11-26 07:42:10.207587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.302 [2024-11-26 07:42:10.207632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.302 [2024-11-26 07:42:10.207642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.302 [2024-11-26 07:42:10.207647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.302 [2024-11-26 07:42:10.207652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.302 [2024-11-26 07:42:10.207662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.302 qpair failed and we were unable to recover it.
00:32:26.302 [2024-11-26 07:42:10.217588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.302 [2024-11-26 07:42:10.217634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.302 [2024-11-26 07:42:10.217643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.302 [2024-11-26 07:42:10.217648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.302 [2024-11-26 07:42:10.217653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.302 [2024-11-26 07:42:10.217663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.302 qpair failed and we were unable to recover it.
00:32:26.302 [2024-11-26 07:42:10.227631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.302 [2024-11-26 07:42:10.227671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.302 [2024-11-26 07:42:10.227681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.302 [2024-11-26 07:42:10.227686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.302 [2024-11-26 07:42:10.227691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.302 [2024-11-26 07:42:10.227701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.302 qpair failed and we were unable to recover it.
00:32:26.302 [2024-11-26 07:42:10.237665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.302 [2024-11-26 07:42:10.237705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.302 [2024-11-26 07:42:10.237714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.302 [2024-11-26 07:42:10.237720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.302 [2024-11-26 07:42:10.237724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.302 [2024-11-26 07:42:10.237734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.302 qpair failed and we were unable to recover it.
00:32:26.302 [2024-11-26 07:42:10.247698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.302 [2024-11-26 07:42:10.247743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.302 [2024-11-26 07:42:10.247753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.302 [2024-11-26 07:42:10.247759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.302 [2024-11-26 07:42:10.247763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.302 [2024-11-26 07:42:10.247774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.302 qpair failed and we were unable to recover it.
00:32:26.302 [2024-11-26 07:42:10.257696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.302 [2024-11-26 07:42:10.257738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.302 [2024-11-26 07:42:10.257748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.302 [2024-11-26 07:42:10.257753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.302 [2024-11-26 07:42:10.257758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.302 [2024-11-26 07:42:10.257768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.302 qpair failed and we were unable to recover it.
00:32:26.302 [2024-11-26 07:42:10.267742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.302 [2024-11-26 07:42:10.267808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.302 [2024-11-26 07:42:10.267818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.302 [2024-11-26 07:42:10.267823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.302 [2024-11-26 07:42:10.267828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.302 [2024-11-26 07:42:10.267839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.302 qpair failed and we were unable to recover it.
00:32:26.302 [2024-11-26 07:42:10.277771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.302 [2024-11-26 07:42:10.277815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.302 [2024-11-26 07:42:10.277825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.302 [2024-11-26 07:42:10.277830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.302 [2024-11-26 07:42:10.277835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.302 [2024-11-26 07:42:10.277845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.302 qpair failed and we were unable to recover it.
00:32:26.302 [2024-11-26 07:42:10.287806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.302 [2024-11-26 07:42:10.287877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.302 [2024-11-26 07:42:10.287890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.302 [2024-11-26 07:42:10.287896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.302 [2024-11-26 07:42:10.287901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.303 [2024-11-26 07:42:10.287911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.303 qpair failed and we were unable to recover it.
00:32:26.303 [2024-11-26 07:42:10.297837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.303 [2024-11-26 07:42:10.297884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.303 [2024-11-26 07:42:10.297894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.303 [2024-11-26 07:42:10.297900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.303 [2024-11-26 07:42:10.297904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.303 [2024-11-26 07:42:10.297915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.303 qpair failed and we were unable to recover it.
00:32:26.303 [2024-11-26 07:42:10.307834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.303 [2024-11-26 07:42:10.307877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.303 [2024-11-26 07:42:10.307887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.303 [2024-11-26 07:42:10.307892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.303 [2024-11-26 07:42:10.307897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.303 [2024-11-26 07:42:10.307908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.303 qpair failed and we were unable to recover it.
00:32:26.303 [2024-11-26 07:42:10.317737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.303 [2024-11-26 07:42:10.317775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.303 [2024-11-26 07:42:10.317785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.303 [2024-11-26 07:42:10.317790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.303 [2024-11-26 07:42:10.317795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.303 [2024-11-26 07:42:10.317805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.303 qpair failed and we were unable to recover it.
00:32:26.303 [2024-11-26 07:42:10.327924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.303 [2024-11-26 07:42:10.327966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.303 [2024-11-26 07:42:10.327976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.303 [2024-11-26 07:42:10.327984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.303 [2024-11-26 07:42:10.327988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.303 [2024-11-26 07:42:10.327999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.303 qpair failed and we were unable to recover it.
00:32:26.303 [2024-11-26 07:42:10.337920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.303 [2024-11-26 07:42:10.337972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.303 [2024-11-26 07:42:10.337981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.303 [2024-11-26 07:42:10.337987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.303 [2024-11-26 07:42:10.337991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.303 [2024-11-26 07:42:10.338002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.303 qpair failed and we were unable to recover it.
00:32:26.303 [2024-11-26 07:42:10.347919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.303 [2024-11-26 07:42:10.347979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.303 [2024-11-26 07:42:10.347988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.303 [2024-11-26 07:42:10.347994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.303 [2024-11-26 07:42:10.347998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.303 [2024-11-26 07:42:10.348009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.303 qpair failed and we were unable to recover it.
00:32:26.303 [2024-11-26 07:42:10.358006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.303 [2024-11-26 07:42:10.358048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.303 [2024-11-26 07:42:10.358058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.303 [2024-11-26 07:42:10.358064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.303 [2024-11-26 07:42:10.358069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.303 [2024-11-26 07:42:10.358079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.303 qpair failed and we were unable to recover it.
00:32:26.303 [2024-11-26 07:42:10.367874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:26.303 [2024-11-26 07:42:10.367916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:26.303 [2024-11-26 07:42:10.367926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:26.303 [2024-11-26 07:42:10.367932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:26.303 [2024-11-26 07:42:10.367937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:26.303 [2024-11-26 07:42:10.367950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:26.303 qpair failed and we were unable to recover it. 
00:32:26.303 [2024-11-26 07:42:10.378066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:26.303 [2024-11-26 07:42:10.378110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:26.303 [2024-11-26 07:42:10.378120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:26.303 [2024-11-26 07:42:10.378125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:26.303 [2024-11-26 07:42:10.378130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90 00:32:26.303 [2024-11-26 07:42:10.378140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:26.303 qpair failed and we were unable to recover it. 
00:32:26.568 [2024-11-26 07:42:10.678829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.568 [2024-11-26 07:42:10.678871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.568 [2024-11-26 07:42:10.678882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.568 [2024-11-26 07:42:10.678887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.568 [2024-11-26 07:42:10.678892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f90f4000b90
00:32:26.568 [2024-11-26 07:42:10.678903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.568 qpair failed and we were unable to recover it.
00:32:26.568 [2024-11-26 07:42:10.679018] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:32:26.568 A controller has encountered a failure and is being reset.
00:32:26.569 [2024-11-26 07:42:10.679069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1473020 (9): Bad file descriptor
00:32:26.829 Controller properly reset.
00:32:26.829 Initializing NVMe Controllers
00:32:26.829 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:26.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:26.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:32:26.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:32:26.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:32:26.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:32:26.829 Initialization complete. Launching workers.
00:32:26.829 Starting thread on core 1
00:32:26.829 Starting thread on core 2
00:32:26.829 Starting thread on core 3
00:32:26.829 Starting thread on core 0
00:32:26.829 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:32:26.829 
00:32:26.829 real 0m11.408s
00:32:26.829 user 0m21.745s
00:32:26.829 sys 0m3.834s
00:32:26.829 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:26.829 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:26.829 ************************************
00:32:26.829 END TEST nvmf_target_disconnect_tc2
00:32:26.829 ************************************
00:32:26.829 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:32:26.829 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:32:26.829 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:32:26.829 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:26.829 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:32:26.829 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:26.829 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:32:26.829 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:26.829 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:32:26.829 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:27.090 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:32:27.090 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:32:27.090 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2317968 ']'
00:32:27.090 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2317968
00:32:27.090 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2317968 ']'
00:32:27.090 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2317968
00:32:27.090 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:32:27.090 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:27.090 07:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2317968
00:32:27.090 07:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:32:27.090 07:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:32:27.090 07:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2317968' 00:32:27.090 killing process with pid 2317968 00:32:27.090 07:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2317968 00:32:27.090 07:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2317968 00:32:27.090 07:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:27.090 07:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:27.090 07:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:27.090 07:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:32:27.091 07:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:32:27.091 07:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:32:27.091 07:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:27.091 07:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:27.091 07:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:27.091 07:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.091 07:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:27.091 07:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.689 07:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:29.689 00:32:29.689 real 0m22.612s 00:32:29.689 user 0m49.857s 00:32:29.689 
sys 0m10.481s 00:32:29.689 07:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:29.689 07:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:29.689 ************************************ 00:32:29.689 END TEST nvmf_target_disconnect 00:32:29.689 ************************************ 00:32:29.689 07:42:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:29.689 00:32:29.689 real 6m47.496s 00:32:29.689 user 11m28.948s 00:32:29.689 sys 2m25.149s 00:32:29.689 07:42:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:29.689 07:42:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.689 ************************************ 00:32:29.689 END TEST nvmf_host 00:32:29.689 ************************************ 00:32:29.689 07:42:13 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:32:29.689 07:42:13 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:32:29.689 07:42:13 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:32:29.689 07:42:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:29.689 07:42:13 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:29.689 07:42:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:29.689 ************************************ 00:32:29.689 START TEST nvmf_target_core_interrupt_mode 00:32:29.689 ************************************ 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:32:29.689 * Looking for test storage... 
00:32:29.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:32:29.689 07:42:13 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:29.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.689 --rc 
genhtml_branch_coverage=1 00:32:29.689 --rc genhtml_function_coverage=1 00:32:29.689 --rc genhtml_legend=1 00:32:29.689 --rc geninfo_all_blocks=1 00:32:29.689 --rc geninfo_unexecuted_blocks=1 00:32:29.689 00:32:29.689 ' 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:29.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.689 --rc genhtml_branch_coverage=1 00:32:29.689 --rc genhtml_function_coverage=1 00:32:29.689 --rc genhtml_legend=1 00:32:29.689 --rc geninfo_all_blocks=1 00:32:29.689 --rc geninfo_unexecuted_blocks=1 00:32:29.689 00:32:29.689 ' 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:29.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.689 --rc genhtml_branch_coverage=1 00:32:29.689 --rc genhtml_function_coverage=1 00:32:29.689 --rc genhtml_legend=1 00:32:29.689 --rc geninfo_all_blocks=1 00:32:29.689 --rc geninfo_unexecuted_blocks=1 00:32:29.689 00:32:29.689 ' 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:29.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.689 --rc genhtml_branch_coverage=1 00:32:29.689 --rc genhtml_function_coverage=1 00:32:29.689 --rc genhtml_legend=1 00:32:29.689 --rc geninfo_all_blocks=1 00:32:29.689 --rc geninfo_unexecuted_blocks=1 00:32:29.689 00:32:29.689 ' 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:29.689 
07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:29.689 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.690 07:42:13 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:29.690 
07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:29.690 ************************************ 00:32:29.690 START TEST nvmf_abort 00:32:29.690 ************************************ 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:29.690 * Looking for test storage... 
00:32:29.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:32:29.690 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:32:29.952 07:42:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:29.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.952 --rc genhtml_branch_coverage=1 00:32:29.952 --rc genhtml_function_coverage=1 00:32:29.952 --rc genhtml_legend=1 00:32:29.952 --rc geninfo_all_blocks=1 00:32:29.952 --rc geninfo_unexecuted_blocks=1 00:32:29.952 00:32:29.952 ' 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:29.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.952 --rc genhtml_branch_coverage=1 00:32:29.952 --rc genhtml_function_coverage=1 00:32:29.952 --rc genhtml_legend=1 00:32:29.952 --rc geninfo_all_blocks=1 00:32:29.952 --rc geninfo_unexecuted_blocks=1 00:32:29.952 00:32:29.952 ' 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:29.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.952 --rc genhtml_branch_coverage=1 00:32:29.952 --rc genhtml_function_coverage=1 00:32:29.952 --rc genhtml_legend=1 00:32:29.952 --rc geninfo_all_blocks=1 00:32:29.952 --rc geninfo_unexecuted_blocks=1 00:32:29.952 00:32:29.952 ' 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:29.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.952 --rc genhtml_branch_coverage=1 00:32:29.952 --rc genhtml_function_coverage=1 00:32:29.952 --rc genhtml_legend=1 00:32:29.952 --rc geninfo_all_blocks=1 00:32:29.952 --rc geninfo_unexecuted_blocks=1 00:32:29.952 00:32:29.952 ' 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:29.952 07:42:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.952 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:29.953 07:42:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:32:29.953 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:38.097 07:42:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:38.097 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:38.097 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:38.097 
07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:38.097 Found net devices under 0000:31:00.0: cvl_0_0 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:38.097 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:38.098 Found net devices under 0000:31:00.1: cvl_0_1 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:38.098 07:42:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:38.098 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:38.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:38.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:32:38.098 00:32:38.098 --- 10.0.0.2 ping statistics --- 00:32:38.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.098 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:38.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:38.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:32:38.098 00:32:38.098 --- 10.0.0.1 ping statistics --- 00:32:38.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.098 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2324453 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2324453 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2324453 ']' 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:38.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:38.098 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:38.098 [2024-11-26 07:42:22.213327] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:38.098 [2024-11-26 07:42:22.214452] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:32:38.098 [2024-11-26 07:42:22.214506] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:38.360 [2024-11-26 07:42:22.320850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:38.360 [2024-11-26 07:42:22.372299] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:38.360 [2024-11-26 07:42:22.372350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:38.360 [2024-11-26 07:42:22.372358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:38.360 [2024-11-26 07:42:22.372365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:38.360 [2024-11-26 07:42:22.372372] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:38.360 [2024-11-26 07:42:22.374193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:38.360 [2024-11-26 07:42:22.374363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:38.360 [2024-11-26 07:42:22.374363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:38.360 [2024-11-26 07:42:22.449654] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:38.360 [2024-11-26 07:42:22.449718] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:38.360 [2024-11-26 07:42:22.450408] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:38.360 [2024-11-26 07:42:22.450688] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:38.932 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:38.932 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:32:38.932 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:38.932 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:38.932 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:38.933 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:38.933 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:32:38.933 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.933 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:39.193 [2024-11-26 07:42:23.067267] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:39.193 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.193 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:32:39.193 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.193 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:32:39.193 Malloc0 00:32:39.193 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:39.194 Delay0 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:39.194 [2024-11-26 07:42:23.163216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.194 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:32:39.194 [2024-11-26 07:42:23.287589] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:41.742 Initializing NVMe Controllers 00:32:41.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:41.742 controller IO queue size 128 less than required 00:32:41.742 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:32:41.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:32:41.742 Initialization complete. Launching workers. 
00:32:41.742 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29022 00:32:41.742 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29079, failed to submit 66 00:32:41.742 success 29022, unsuccessful 57, failed 0 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:41.742 rmmod nvme_tcp 00:32:41.742 rmmod nvme_fabrics 00:32:41.742 rmmod nvme_keyring 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:41.742 07:42:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2324453 ']' 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2324453 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2324453 ']' 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2324453 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2324453 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2324453' 00:32:41.742 killing process with pid 2324453 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2324453 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2324453 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:41.742 07:42:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:41.742 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.292 07:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:44.292 00:32:44.292 real 0m14.208s 00:32:44.292 user 0m11.469s 00:32:44.292 sys 0m7.448s 00:32:44.292 07:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:44.292 07:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:44.292 ************************************ 00:32:44.292 END TEST nvmf_abort 00:32:44.292 ************************************ 00:32:44.292 07:42:27 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:44.292 07:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:44.292 07:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:44.292 07:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:44.292 ************************************ 00:32:44.292 START TEST nvmf_ns_hotplug_stress 00:32:44.292 ************************************ 00:32:44.292 07:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:44.292 * Looking for test storage... 
00:32:44.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:32:44.292 07:42:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:32:44.292 07:42:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:44.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.292 --rc genhtml_branch_coverage=1 00:32:44.292 --rc genhtml_function_coverage=1 00:32:44.292 --rc genhtml_legend=1 00:32:44.292 --rc geninfo_all_blocks=1 00:32:44.292 --rc geninfo_unexecuted_blocks=1 00:32:44.292 00:32:44.292 ' 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:44.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.292 --rc genhtml_branch_coverage=1 00:32:44.292 --rc genhtml_function_coverage=1 00:32:44.292 --rc genhtml_legend=1 00:32:44.292 --rc geninfo_all_blocks=1 00:32:44.292 --rc geninfo_unexecuted_blocks=1 00:32:44.292 00:32:44.292 ' 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:44.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.292 --rc genhtml_branch_coverage=1 00:32:44.292 --rc genhtml_function_coverage=1 00:32:44.292 --rc genhtml_legend=1 00:32:44.292 --rc geninfo_all_blocks=1 00:32:44.292 --rc geninfo_unexecuted_blocks=1 00:32:44.292 00:32:44.292 ' 00:32:44.292 07:42:28 
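The `lt 1.15 2` / `cmp_versions` trace above (scripts/common.sh@333-368) splits each version on `.`, `-` and `:` and compares component-wise. A hedged sketch of that idea, assuming purely numeric components (the function name `ver_lt` is illustrative, not the exact SPDK helper):

```shell
# Component-wise "less than" for dotted version strings, splitting on
# '.', '-' and ':' like the traced cmp_versions. Assumes numeric parts;
# missing components are treated as 0.
ver_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1    # equal is not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```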
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:44.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.292 --rc genhtml_branch_coverage=1 00:32:44.292 --rc genhtml_function_coverage=1 00:32:44.292 --rc genhtml_legend=1 00:32:44.292 --rc geninfo_all_blocks=1 00:32:44.292 --rc geninfo_unexecuted_blocks=1 00:32:44.292 00:32:44.292 ' 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:44.292 07:42:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:44.292 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.293 
07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
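The repeated `/opt/golangci/...:/opt/protoc/...:/opt/go/...` prefixes above are real: paths/export.sh prepends the same toolchain directories on every sourcing, so PATH accumulates duplicates across nested test scripts. A small first-occurrence-wins dedupe pass (illustrative, not part of the harness) keeps the same search order while shrinking the variable:

```shell
# Deduplicate a PATH-style list, keeping the first occurrence of each
# directory so lookup order is preserved.
dedupe_path() {
    local IFS=: out= seen= d
    for d in $1; do
        case ":$seen:" in *":$d:"*) continue ;; esac
        seen=$seen:$d
        out=${out:+$out:}$d
    done
    printf '%s\n' "$out"
}

dedupe_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin:/usr/bin"
```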
nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:32:44.293 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:32:52.445 
07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:52.445 07:42:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:52.445 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:52.445 07:42:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:52.445 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:52.445 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.446 
07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:52.446 Found net devices under 0000:31:00.0: cvl_0_0 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:52.446 Found net devices under 0000:31:00.1: cvl_0_1 00:32:52.446 
07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:52.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:52.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:32:52.446 00:32:52.446 --- 10.0.0.2 ping statistics --- 00:32:52.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.446 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:52.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:52.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:32:52.446 00:32:52.446 --- 10.0.0.1 ping statistics --- 00:32:52.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.446 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:52.446 07:42:36 
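The `nvmf_tcp_init` steps traced above move the target-side port into the `cvl_0_0_ns_spdk` namespace, address both ends, bring the links up, open TCP 4420 with a tagged iptables rule, and verify with `ping`. A dry-run sketch of that sequence, using the interface and namespace names from the log (the `run` wrapper just echoes the commands so this stays runnable without root or real NICs):

```shell
# Dry-run sketch of the target/initiator namespace topology. With
# DRY_RUN=1 each command is printed instead of executed.
DRY_RUN=1
run() { [ "${DRY_RUN:-0}" = 1 ] && echo "+ $*" || "$@"; }

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0
INI_IF=cvl_0_1
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                       # target port into ns
run ip addr add 10.0.0.1/24 dev "$INI_IF"                   # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
run ping -c 1 10.0.0.2                                      # initiator -> target
```

The `SPDK_NVMF:` comment on the iptables rule is what lets the teardown later strip exactly these rules out of `iptables-save` output.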
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:52.446 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:52.707 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2329809 00:32:52.707 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2329809 00:32:52.707 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:52.707 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2329809 ']' 00:32:52.707 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:52.707 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:52.707 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:52.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:52.707 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:52.707 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:52.707 [2024-11-26 07:42:36.632222] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:52.707 [2024-11-26 07:42:36.633390] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:32:52.707 [2024-11-26 07:42:36.633441] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:52.707 [2024-11-26 07:42:36.739289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:52.707 [2024-11-26 07:42:36.791202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:52.707 [2024-11-26 07:42:36.791263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:52.707 [2024-11-26 07:42:36.791272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:52.707 [2024-11-26 07:42:36.791278] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:52.707 [2024-11-26 07:42:36.791284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:52.707 [2024-11-26 07:42:36.793094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:52.707 [2024-11-26 07:42:36.793262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.707 [2024-11-26 07:42:36.793262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:52.968 [2024-11-26 07:42:36.868054] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:52.968 [2024-11-26 07:42:36.868134] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:52.968 [2024-11-26 07:42:36.868812] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:52.968 [2024-11-26 07:42:36.869107] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:53.539 07:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:53.539 07:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:32:53.539 07:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:53.539 07:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:53.539 07:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:53.539 07:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:53.539 07:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
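The trace lines above (nvmf/common.sh@263 through @291) show the harness moving one interface into a network namespace so that target and initiator can talk over real TCP on one host. The following is a hedged, condensed sketch of that sequence reconstructed from the log, not the actual `nvmf/common.sh` code; the names `cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk`, and the `10.0.0.x` addresses are taken from the log, and the `DRY_RUN` toggle is an addition here so the sketch can be read or printed without root privileges.

```shell
#!/bin/sh
# Sketch of the netns setup traced above (assumption: reconstructed from the
# log, not copied from nvmf/common.sh). DRY_RUN=1 (default) prints each
# command instead of executing it; unset it and run as root to apply.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk                      # namespace name from the log

run ip netns add "$NS"                  # common.sh@271
run ip link set cvl_0_0 netns "$NS"     # @274: target-side port into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1 # @277: initiator side stays in root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # @278
run ip link set cvl_0_1 up                                    # @281
run ip netns exec "$NS" ip link set cvl_0_0 up                # @283
run ip netns exec "$NS" ip link set lo up                     # @284
# @287/@790: open the NVMe/TCP port (4420) toward the namespaced target
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                  # @290: verify cross-ns reachability
```

The target is then started inside the namespace via `ip netns exec "$NS" nvmf_tgt ...`, which is what the `NVMF_TARGET_NS_CMD` array prepended to `NVMF_APP` accomplishes in the trace.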
00:32:53.539 07:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:53.539 [2024-11-26 07:42:37.646182] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:53.799 07:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:53.799 07:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:54.060 [2024-11-26 07:42:38.011007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:54.060 07:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:54.320 07:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:32:54.320 Malloc0 00:32:54.320 07:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:54.581 Delay0 00:32:54.581 07:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:54.841 07:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:32:54.841 NULL1 00:32:54.841 07:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:32:55.101 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2330266 00:32:55.101 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:32:55.101 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:32:55.101 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:55.361 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:55.361 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:32:55.361 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:32:55.621 true 00:32:55.621 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:32:55.621 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:55.881 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:56.141 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:32:56.141 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:32:56.141 true 00:32:56.141 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:32:56.141 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:56.401 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:56.661 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:32:56.661 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:32:56.661 true 00:32:56.921 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:32:56.921 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:56.921 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:57.182 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:32:57.182 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:32:57.443 true 00:32:57.443 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:32:57.443 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:57.443 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:57.705 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:32:57.705 07:42:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:32:57.966 true 00:32:57.966 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:32:57.966 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:58.226 07:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:58.226 07:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:32:58.226 07:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:32:58.486 true 00:32:58.486 07:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:32:58.486 07:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:58.747 07:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:59.009 07:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:32:59.009 07:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:32:59.009 true 00:32:59.009 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:32:59.009 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:59.271 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:59.534 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:32:59.534 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:32:59.534 true 00:32:59.534 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:32:59.534 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:59.794 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:00.054 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:33:00.054 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:33:00.054 true 00:33:00.054 07:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:00.054 07:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:00.315 07:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:00.576 07:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:33:00.576 07:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:33:00.576 true 00:33:00.837 07:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:00.837 07:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:00.837 07:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:01.098 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:33:01.098 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:33:01.359 true 00:33:01.359 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:01.359 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:01.359 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:01.619 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:33:01.619 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:33:01.880 true 00:33:01.880 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:01.880 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:01.880 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:02.140 07:42:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:33:02.140 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:33:02.400 true 00:33:02.400 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:02.400 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:02.661 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:02.661 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:33:02.661 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:33:02.923 true 00:33:02.923 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:02.923 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:03.184 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:33:03.184 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:33:03.184 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:33:03.447 true 00:33:03.447 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:03.447 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:03.708 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:03.708 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:33:03.708 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:33:03.968 true 00:33:03.968 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:03.968 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:04.240 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:33:04.502 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:33:04.502 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:33:04.502 true 00:33:04.502 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:04.502 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:04.762 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:05.023 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:33:05.023 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:33:05.023 true 00:33:05.023 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:05.023 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:05.284 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:05.544 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:33:05.544 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:33:05.544 true 00:33:05.805 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:05.805 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:05.805 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:06.065 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:33:06.065 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:33:06.326 true 00:33:06.326 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:06.326 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:06.326 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:06.586 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:33:06.586 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:33:06.846 true 00:33:06.846 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:06.846 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:06.846 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:07.106 07:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:33:07.106 07:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:33:07.367 true 00:33:07.367 07:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:07.367 07:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:07.367 07:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:07.627 07:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:33:07.627 07:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:33:07.888 true 00:33:07.888 07:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:07.888 07:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:08.149 07:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:08.149 07:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:33:08.149 07:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:33:08.409 true 00:33:08.409 07:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:08.409 07:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:08.668 07:42:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:08.668 07:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:33:08.668 07:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:33:08.928 true 00:33:08.928 07:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:08.928 07:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:09.189 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:09.449 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:33:09.449 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:33:09.449 true 00:33:09.449 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:09.449 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:33:09.710 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:09.971 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:33:09.971 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:33:09.971 true 00:33:09.971 07:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:09.971 07:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:10.232 07:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:10.493 07:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:33:10.493 07:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:33:10.493 true 00:33:10.754 07:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:10.754 07:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:33:10.755 07:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:11.016 07:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:33:11.016 07:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:33:11.277 true 00:33:11.277 07:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:11.277 07:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:11.538 07:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:11.538 07:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:33:11.538 07:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:33:11.798 true 00:33:11.798 07:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:11.798 07:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:12.059 07:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:12.320 07:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:33:12.320 07:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:33:12.320 true 00:33:12.320 07:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:12.320 07:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:12.580 07:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:12.841 07:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:33:12.841 07:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:33:12.841 true 00:33:12.841 07:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:12.841 07:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:13.103 07:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:13.363 07:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:33:13.363 07:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:33:13.363 true 00:33:13.624 07:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:13.624 07:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:13.624 07:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:13.884 07:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:33:13.884 07:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:33:13.884 true 00:33:14.145 07:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:14.145 07:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:14.145 07:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:14.404 07:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:33:14.404 07:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:33:14.664 true 00:33:14.664 07:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:14.664 07:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:14.664 07:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:14.924 07:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:33:14.924 07:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:33:15.185 true 00:33:15.185 07:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:15.185 07:42:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:15.444 07:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:15.444 07:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:33:15.444 07:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:33:15.704 true 00:33:15.704 07:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:15.704 07:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:15.965 07:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:15.965 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:33:15.965 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:33:16.225 true 00:33:16.225 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 
00:33:16.225 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:16.485 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:16.485 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:33:16.485 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:33:16.745 true 00:33:16.745 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:16.746 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:17.007 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:17.268 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:33:17.268 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:33:17.268 true 00:33:17.268 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 2330266 00:33:17.268 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:17.528 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:17.788 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:33:17.788 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:33:17.788 true 00:33:17.788 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:17.788 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:18.050 07:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:18.310 07:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:33:18.310 07:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:33:18.571 true 00:33:18.571 07:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:18.571 07:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:18.571 07:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:18.832 07:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:33:18.832 07:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:33:19.094 true 00:33:19.094 07:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:19.094 07:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:19.094 07:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:19.355 07:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:33:19.355 07:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:33:19.616 true 00:33:19.616 07:43:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:19.616 07:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:19.877 07:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:19.877 07:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:33:19.877 07:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:33:20.137 true 00:33:20.137 07:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:20.137 07:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:20.398 07:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:20.398 07:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:33:20.398 07:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:33:20.658 true 
00:33:20.658 07:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:20.658 07:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:20.918 07:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:21.179 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:33:21.179 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:33:21.179 true 00:33:21.179 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:21.179 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:21.440 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:21.702 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:33:21.702 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 
00:33:21.702 true 00:33:21.702 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:21.702 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:21.963 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:22.224 07:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:33:22.224 07:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:33:22.224 true 00:33:22.486 07:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:22.486 07:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:22.486 07:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:22.747 07:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:33:22.747 07:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1050 00:33:23.020 true 00:33:23.020 07:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:23.020 07:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:23.020 07:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:23.323 07:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:33:23.323 07:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:33:23.323 true 00:33:23.600 07:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:23.600 07:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:23.600 07:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:23.872 07:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:33:23.872 07:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:33:23.872 true 00:33:23.872 07:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:23.872 07:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:24.131 07:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:24.391 07:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:33:24.391 07:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:33:24.651 true 00:33:24.651 07:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:24.651 07:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:24.651 07:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:24.911 07:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:33:24.911 07:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:33:25.172 true 00:33:25.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:25.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:25.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:25.432 Initializing NVMe Controllers 00:33:25.432 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:25.432 Controller IO queue size 128, less than required. 00:33:25.432 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:25.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:25.432 Initialization complete. Launching workers. 
00:33:25.432 ======================================================== 00:33:25.432 Latency(us) 00:33:25.432 Device Information : IOPS MiB/s Average min max 00:33:25.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 29683.56 14.49 4312.07 1485.62 10757.31 00:33:25.432 ======================================================== 00:33:25.432 Total : 29683.56 14.49 4312.07 1485.62 10757.31 00:33:25.432 00:33:25.432 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:33:25.432 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:33:25.692 true 00:33:25.692 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2330266 00:33:25.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2330266) - No such process 00:33:25.692 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2330266 00:33:25.692 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:25.692 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:25.953 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:33:25.953 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:33:25.953 
07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:33:25.953 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:25.953 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:33:26.214 null0 00:33:26.214 07:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:26.214 07:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:26.214 07:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:33:26.214 null1 00:33:26.214 07:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:26.214 07:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:26.214 07:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:33:26.474 null2 00:33:26.474 07:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:26.474 07:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:26.474 07:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:33:26.735 null3 00:33:26.735 07:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:26.735 07:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:26.735 07:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:33:26.735 null4 00:33:26.735 07:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:26.735 07:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:26.735 07:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:33:26.996 null5 00:33:26.996 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:26.996 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:26.996 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:33:27.257 null6 00:33:27.257 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:27.257 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:27.257 07:43:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:33:27.257 null7 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:27.519 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2336462 2336465 2336466 2336469 2336471 2336473 2336474 2336476 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:27.520 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:27.793 07:43:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:27.793 07:43:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:27.793 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:28.071 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:28.071 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:28.071 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:28.071 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:28.071 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:28.071 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:28.071 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:28.071 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.071 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.071 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:28.071 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.071 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.071 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:28.071 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.071 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.071 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.349 07:43:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.349 07:43:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.349 07:43:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:28.349 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 
null3 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:28.616 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:28.879 07:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:28.879 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:29.142 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:29.142 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:29.142 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.142 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.142 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:29.143 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:29.143 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:29.143 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:29.143 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:29.143 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:29.143 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.143 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.143 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:29.143 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.143 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.143 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:29.143 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:29.405 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.405 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.405 07:43:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:29.405 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.405 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.405 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:29.405 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.405 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.405 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:29.405 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.405 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.405 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:29.405 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.405 07:43:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.406 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:29.406 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:29.406 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:29.406 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:29.406 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:29.406 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:29.406 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:29.406 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.406 07:43:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.406 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:29.406 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.668 07:43:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.668 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:29.930 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:29.930 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:29.930 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:29.930 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:29.930 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:29.930 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.930 07:43:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.930 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:29.930 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:29.930 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:29.930 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.930 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.930 07:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:29.930 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.930 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.930 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:29.930 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.930 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.930 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:29.930 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.930 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.930 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:29.930 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:29.930 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:29.930 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:30.193 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.193 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.193 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:33:30.193 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:30.193 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.193 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.193 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:30.193 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:30.193 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:30.193 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:30.193 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:30.194 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:33:30.194 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.194 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.194 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:30.194 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:30.194 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:30.455 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:30.456 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:30.456 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.718 07:43:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.718 07:43:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:30.718 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:30.978 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:30.978 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.978 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.978 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:30.979 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:30.979 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:30.979 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:30.979 07:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:30.979 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.979 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.979 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:30.979 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.979 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.979 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:30.979 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.979 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:30.979 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:30.979 07:43:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:30.979 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:31.240 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:31.240 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:31.240 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:31.240 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:31.240 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:31.240 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:31.240 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:31.240 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:31.240 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT 
SIGTERM EXIT 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:31.501 rmmod nvme_tcp 00:33:31.501 rmmod nvme_fabrics 00:33:31.501 rmmod nvme_keyring 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2329809 ']' 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2329809 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2329809 ']' 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2329809 00:33:31.501 07:43:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2329809 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2329809' 00:33:31.501 killing process with pid 2329809 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2329809 00:33:31.501 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2329809 00:33:31.762 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:31.762 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:31.762 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:31.762 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:33:31.762 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:33:31.762 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:31.762 07:43:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:33:31.762 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:31.762 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:31.762 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.762 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:31.762 07:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:33.674 07:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:33.674 00:33:33.674 real 0m49.866s 00:33:33.674 user 3m4.124s 00:33:33.674 sys 0m22.684s 00:33:33.674 07:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:33.674 07:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:33.674 ************************************ 00:33:33.674 END TEST nvmf_ns_hotplug_stress 00:33:33.674 ************************************ 00:33:33.935 07:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:33:33.935 07:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:33.935 07:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:33:33.935 07:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:33.935 ************************************ 00:33:33.935 START TEST nvmf_delete_subsystem 00:33:33.935 ************************************ 00:33:33.935 07:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:33:33.935 * Looking for test storage... 00:33:33.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:33.935 07:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:33.935 07:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:33:33.935 07:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:33.935 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:33.935 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:33.935 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:33.935 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:33.935 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:33:33.935 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:33:33.935 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:33:33.935 07:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:33:33.935 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:33:33.935 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:33:33.935 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:33:33.935 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:33.935 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:33:33.935 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:33:33.935 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:33.935 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:33.935 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:33:33.935 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:33:33.935 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:33.935 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:34.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.197 --rc genhtml_branch_coverage=1 00:33:34.197 --rc genhtml_function_coverage=1 00:33:34.197 --rc genhtml_legend=1 00:33:34.197 --rc geninfo_all_blocks=1 00:33:34.197 --rc geninfo_unexecuted_blocks=1 00:33:34.197 00:33:34.197 ' 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:34.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.197 --rc genhtml_branch_coverage=1 00:33:34.197 --rc genhtml_function_coverage=1 00:33:34.197 --rc genhtml_legend=1 00:33:34.197 --rc geninfo_all_blocks=1 00:33:34.197 --rc geninfo_unexecuted_blocks=1 00:33:34.197 00:33:34.197 ' 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:34.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.197 --rc genhtml_branch_coverage=1 00:33:34.197 --rc genhtml_function_coverage=1 00:33:34.197 --rc genhtml_legend=1 00:33:34.197 --rc geninfo_all_blocks=1 00:33:34.197 --rc geninfo_unexecuted_blocks=1 00:33:34.197 00:33:34.197 ' 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:34.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.197 --rc genhtml_branch_coverage=1 00:33:34.197 --rc genhtml_function_coverage=1 00:33:34.197 --rc genhtml_legend=1 00:33:34.197 --rc geninfo_all_blocks=1 00:33:34.197 --rc geninfo_unexecuted_blocks=1 00:33:34.197 00:33:34.197 ' 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.197 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:34.198 07:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:33:34.198 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:42.353 07:43:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:42.353 07:43:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:42.353 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:42.353 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.353 07:43:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:42.353 Found net devices under 0000:31:00.0: cvl_0_0 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.353 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:42.354 Found net devices under 0000:31:00.1: cvl_0_1 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:33:42.354 07:43:25 
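The trace above buckets NVMe-oF-capable NICs by PCI vendor:device ID into the e810/x722/mlx arrays before walking their net devices. As a minimal sketch of that bucketing, using only the IDs visible in this trace (`classify_nic` is a hypothetical helper, not a function in nvmf/common.sh):

```shell
# Hypothetical helper mirroring how gather_supported_nvmf_pci_devs buckets
# NICs by vendor:device ID; the IDs below are the ones probed in this trace.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;        # Intel E810 (ice driver)
        0x8086:0x37d2)               echo x722 ;;        # Intel X722
        0x15b3:*)                    echo mlx ;;         # Mellanox ConnectX family
        *)                           echo unsupported ;;
    esac
}

classify_nic 0x8086:0x159b   # -> e810, matching 0000:31:00.0 / 0000:31:00.1 above
```

In this run both ports report 0x8086:0x159b, so `pci_devs` is set to the e810 list and the two `cvl_0_*` net devices are picked up.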
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:42.354 07:43:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:33:42.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:42.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms
00:33:42.354
00:33:42.354 --- 10.0.0.2 ping statistics ---
00:33:42.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:42.354 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:42.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:42.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms
00:33:42.354
00:33:42.354 --- 10.0.0.1 ping statistics ---
00:33:42.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:42.354 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:33:42.354
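Collapsed from the trace above, nvmf_tcp_init's namespace plumbing reduces to the steps below: move the target port into its own netns, address both ends, open the NVMe/TCP port, and verify reachability both ways. This is a dry-run sketch (`run` is a hypothetical wrapper that echoes instead of executing, since the real commands need root and the E810 port pair from this run):

```shell
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }   # dry-run; replace the body with "$@" to actually execute

run ip netns add "$NS"                                          # target-side namespace
run ip link set cvl_0_0 netns "$NS"                             # move the target port in
run ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator IP (host side)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0     # target IP (inside netns)
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # default NVMe/TCP port
run ping -c 1 10.0.0.2                                          # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                      # target -> initiator
```

Running nvmf_tgt inside the namespace while the initiator stays in the root namespace is what lets a single two-port host exercise a real NIC-to-NIC TCP path.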
07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2342005 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2342005 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2342005 ']' 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:42.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:42.354 07:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:42.354 [2024-11-26 07:43:25.916280] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:33:42.354 [2024-11-26 07:43:25.917346] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization...
00:33:42.354 [2024-11-26 07:43:25.917390] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:42.354 [2024-11-26 07:43:26.006362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:33:42.354 [2024-11-26 07:43:26.046192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:42.354 [2024-11-26 07:43:26.046226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:42.354 [2024-11-26 07:43:26.046235] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:42.354 [2024-11-26 07:43:26.046242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:42.354 [2024-11-26 07:43:26.046248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:42.354 [2024-11-26 07:43:26.047460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:42.354 [2024-11-26 07:43:26.047463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:42.354 [2024-11-26 07:43:26.102572] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:33:42.354 [2024-11-26 07:43:26.103086] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:33:42.354 [2024-11-26 07:43:26.103430] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:33:42.616 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:42.616 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:33:42.616 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:42.616 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:42.616 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:42.879 [2024-11-26 07:43:26.760400] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:42.879 [2024-11-26 07:43:26.788736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:42.879 NULL1 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:42.879 Delay0 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2342323 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:33:42.879 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:42.879 [2024-11-26 07:43:26.885284] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
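The rpc_cmd calls traced above (rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, talking to the nvmf_tgt behind /var/tmp/spdk.sock) amount to the sequence below. Sketched here as echoed commands, since they need a running target inside the cvl_0_0_ns_spdk namespace; `rpc` is a hypothetical dry-run stand-in:

```shell
NQN=nqn.2016-06.io.spdk:cnode1
rpc() { echo "rpc_cmd $*"; }   # dry-run stand-in for the real rpc_cmd wrapper

rpc nvmf_create_transport -t tcp -o -u 8192                  # options from NVMF_TRANSPORT_OPTS
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512                          # size in MiB, block size in bytes
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns "$NQN" Delay0
# spdk_nvme_perf then hammers 10.0.0.2:4420 while the subsystem is deleted out
# from under it -- the aborted completions in the trace are the expected result:
rpc nvmf_delete_subsystem "$NQN"
```

The delay bdev (latencies given in microseconds, so roughly one second per I/O) guarantees plenty of I/O is still in flight when nvmf_delete_subsystem fires, which is exactly what this test wants to exercise.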
00:33:44.797 07:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:44.797 07:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.797 07:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 starting I/O failed: -6 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 starting I/O failed: -6 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 starting I/O failed: -6 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 starting I/O failed: -6 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 starting I/O failed: -6 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 starting I/O failed: -6 00:33:45.060 Write completed with error (sct=0, 
sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 starting I/O failed: -6 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 starting I/O failed: -6 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 starting I/O failed: -6 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 starting I/O failed: -6 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 starting I/O failed: -6 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error 
(sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Read completed with error (sct=0, sc=8) 00:33:45.060 Write completed with error (sct=0, sc=8) 00:33:45.060 Read 
completed with error (sct=0, sc=8)
00:33:45.060 Read completed with error (sct=0, sc=8)
00:33:45.060 [2024-11-26 07:43:29.130834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64f00 is same with the state(6) to be set
00:33:45.060 Write completed with error (sct=0, sc=8)
00:33:45.060 starting I/O failed: -6
00:33:45.060 [repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted]
00:33:45.060 [2024-11-26 07:43:29.133347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f71f8000c40 is same with the state(6) to be set
00:33:45.061 [repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted]
00:33:46.003 [2024-11-26 07:43:30.108884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc665e0 is same with the state(6) to be set
00:33:46.265 [repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted]
00:33:46.265 [2024-11-26 07:43:30.134419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc650e0 is same with the state(6) to be set
00:33:46.265 [repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted]
00:33:46.265 [2024-11-26 07:43:30.134581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc654a0 is same with the state(6) to be set
00:33:46.265 [repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted]
00:33:46.265 [2024-11-26 07:43:30.135316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f71f800d7e0 is same with the state(6) to be set
00:33:46.265 [repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted]
00:33:46.265 [2024-11-26 07:43:30.135616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f71f800d020 is same with the state(6) to be set
00:33:46.265 Initializing NVMe Controllers
00:33:46.266 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:46.266 Controller IO queue size 128, less than required.
00:33:46.266 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:46.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:46.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:46.266 Initialization complete. Launching workers. 00:33:46.266 ======================================================== 00:33:46.266 Latency(us) 00:33:46.266 Device Information : IOPS MiB/s Average min max 00:33:46.266 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.26 0.08 900301.03 291.60 1045437.07 00:33:46.266 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.27 0.08 902313.53 329.74 1010728.05 00:33:46.266 ======================================================== 00:33:46.266 Total : 333.53 0.16 901304.27 291.60 1045437.07 00:33:46.266 00:33:46.266 [2024-11-26 07:43:30.136278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc665e0 (9): Bad file descriptor 00:33:46.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:33:46.266 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.266 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:33:46.266 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2342323 00:33:46.266 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:33:46.527 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:33:46.527 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # 
kill -0 2342323 00:33:46.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2342323) - No such process 00:33:46.527 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2342323 00:33:46.527 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:33:46.527 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2342323 00:33:46.527 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:33:46.527 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:46.527 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:33:46.527 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:46.527 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2342323 00:33:46.527 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:33:46.527 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:46.527 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:46.527 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:46.527 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:46.527 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.527 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:46.787 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.787 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:46.787 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.787 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:46.787 [2024-11-26 07:43:30.664517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:46.787 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.787 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:46.787 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.787 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:46.787 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.787 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2342997 00:33:46.787 07:43:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:33:46.787 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2342997 00:33:46.787 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:46.787 07:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:46.787 [2024-11-26 07:43:30.741736] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:33:47.359 07:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:47.359 07:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2342997 00:33:47.359 07:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:47.621 07:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:47.621 07:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2342997 00:33:47.621 07:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:48.192 07:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:48.192 07:43:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2342997 00:33:48.192 07:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:48.765 07:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:48.765 07:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2342997 00:33:48.765 07:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:49.338 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:49.338 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2342997 00:33:49.338 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:49.599 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:49.599 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2342997 00:33:49.599 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:49.860 Initializing NVMe Controllers 00:33:49.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:49.860 Controller IO queue size 128, less than required. 00:33:49.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:33:49.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:49.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:49.860 Initialization complete. Launching workers. 00:33:49.860 ======================================================== 00:33:49.860 Latency(us) 00:33:49.860 Device Information : IOPS MiB/s Average min max 00:33:49.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002337.58 1000186.87 1041504.96 00:33:49.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003789.29 1000312.46 1010140.57 00:33:49.860 ======================================================== 00:33:49.860 Total : 256.00 0.12 1003063.44 1000186.87 1041504.96 00:33:49.860 00:33:50.122 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:50.122 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2342997 00:33:50.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2342997) - No such process 00:33:50.122 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2342997 00:33:50.122 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:33:50.122 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:33:50.122 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:50.122 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:33:50.122 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:50.122 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:33:50.122 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:50.122 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:50.122 rmmod nvme_tcp 00:33:50.122 rmmod nvme_fabrics 00:33:50.383 rmmod nvme_keyring 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2342005 ']' 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2342005 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2342005 ']' 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2342005 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2342005 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2342005' 00:33:50.383 killing process with pid 2342005 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2342005 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2342005 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:50.383 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:52.926 00:33:52.926 real 0m18.688s 00:33:52.926 user 0m26.593s 00:33:52.926 sys 0m7.739s 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:52.926 ************************************ 00:33:52.926 END TEST nvmf_delete_subsystem 00:33:52.926 ************************************ 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:52.926 ************************************ 00:33:52.926 START TEST nvmf_host_management 00:33:52.926 ************************************ 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:52.926 * Looking for test storage... 
00:33:52.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:33:52.926 07:43:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:52.926 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:52.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.926 --rc genhtml_branch_coverage=1 00:33:52.926 --rc genhtml_function_coverage=1 00:33:52.926 --rc genhtml_legend=1 00:33:52.926 --rc geninfo_all_blocks=1 00:33:52.926 --rc geninfo_unexecuted_blocks=1 00:33:52.926 00:33:52.926 ' 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:52.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.927 --rc genhtml_branch_coverage=1 00:33:52.927 --rc genhtml_function_coverage=1 00:33:52.927 --rc genhtml_legend=1 00:33:52.927 --rc geninfo_all_blocks=1 00:33:52.927 --rc geninfo_unexecuted_blocks=1 00:33:52.927 00:33:52.927 ' 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:52.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.927 --rc genhtml_branch_coverage=1 00:33:52.927 --rc genhtml_function_coverage=1 00:33:52.927 --rc genhtml_legend=1 00:33:52.927 --rc geninfo_all_blocks=1 00:33:52.927 --rc geninfo_unexecuted_blocks=1 00:33:52.927 00:33:52.927 ' 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:52.927 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.927 --rc genhtml_branch_coverage=1 00:33:52.927 --rc genhtml_function_coverage=1 00:33:52.927 --rc genhtml_legend=1 00:33:52.927 --rc geninfo_all_blocks=1 00:33:52.927 --rc geninfo_unexecuted_blocks=1 00:33:52.927 00:33:52.927 ' 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:52.927 07:43:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.927 
07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:33:52.927 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:34:01.073 
07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:01.073 07:43:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:01.073 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:01.073 07:43:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:01.073 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.073 07:43:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:01.073 Found net devices under 0000:31:00.0: cvl_0_0 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:01.073 Found net devices under 0000:31:00.1: cvl_0_1 00:34:01.073 07:43:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:01.073 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:01.074 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:01.074 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:01.074 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:01.074 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:01.074 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:01.074 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:01.074 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:01.074 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:34:01.074 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:01.074 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:01.074 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:01.074 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:01.074 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:01.074 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:01.074 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:01.074 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:01.074 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:01.074 07:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:01.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:01.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:34:01.074 00:34:01.074 --- 10.0.0.2 ping statistics --- 00:34:01.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.074 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:01.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:01.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:34:01.074 00:34:01.074 --- 10.0.0.1 ping statistics --- 00:34:01.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.074 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2348357 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2348357 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2348357 ']' 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:01.074 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:01.074 [2024-11-26 07:43:45.143988] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:01.074 [2024-11-26 07:43:45.144981] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:34:01.074 [2024-11-26 07:43:45.145020] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:01.335 [2024-11-26 07:43:45.247021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:01.335 [2024-11-26 07:43:45.292888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:01.335 [2024-11-26 07:43:45.292929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:01.335 [2024-11-26 07:43:45.292937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:01.335 [2024-11-26 07:43:45.292944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:01.335 [2024-11-26 07:43:45.292950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:01.335 [2024-11-26 07:43:45.294529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:01.335 [2024-11-26 07:43:45.294690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:01.335 [2024-11-26 07:43:45.294846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:01.335 [2024-11-26 07:43:45.294847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:01.335 [2024-11-26 07:43:45.350906] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:01.335 [2024-11-26 07:43:45.351462] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:01.335 [2024-11-26 07:43:45.352435] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:01.335 [2024-11-26 07:43:45.352594] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:01.335 [2024-11-26 07:43:45.352789] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:34:01.906 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:01.906 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:34:01.906 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:01.906 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:01.906 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:01.906 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:01.906 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:01.906 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.906 07:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:01.906 [2024-11-26 07:43:45.975597] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:01.906 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.906 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:34:01.906 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:01.907 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:01.907 07:43:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:01.907 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:34:01.907 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:34:01.907 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.907 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:02.168 Malloc0 00:34:02.168 [2024-11-26 07:43:46.079836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2348679 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2348679 /var/tmp/bdevperf.sock 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2348679 ']' 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:02.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:02.168 { 00:34:02.168 "params": { 00:34:02.168 "name": "Nvme$subsystem", 00:34:02.168 "trtype": "$TEST_TRANSPORT", 00:34:02.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:02.168 "adrfam": "ipv4", 00:34:02.168 "trsvcid": "$NVMF_PORT", 00:34:02.168 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:34:02.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:02.168 "hdgst": ${hdgst:-false}, 00:34:02.168 "ddgst": ${ddgst:-false} 00:34:02.168 }, 00:34:02.168 "method": "bdev_nvme_attach_controller" 00:34:02.168 } 00:34:02.168 EOF 00:34:02.168 )") 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:34:02.168 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:02.168 "params": { 00:34:02.168 "name": "Nvme0", 00:34:02.168 "trtype": "tcp", 00:34:02.168 "traddr": "10.0.0.2", 00:34:02.168 "adrfam": "ipv4", 00:34:02.168 "trsvcid": "4420", 00:34:02.168 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:02.168 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:02.168 "hdgst": false, 00:34:02.168 "ddgst": false 00:34:02.168 }, 00:34:02.168 "method": "bdev_nvme_attach_controller" 00:34:02.168 }' 00:34:02.168 [2024-11-26 07:43:46.183831] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:34:02.168 [2024-11-26 07:43:46.183891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2348679 ] 00:34:02.168 [2024-11-26 07:43:46.261465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:02.168 [2024-11-26 07:43:46.297527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.429 Running I/O for 10 seconds... 
00:34:03.004 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:03.004 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:34:03.004 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:34:03.004 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.004 07:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:34:03.004 07:43:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1013 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1013 -ge 100 ']' 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.004 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:03.004 
[2024-11-26 07:43:47.056124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa25800 is same with the state(6) to be set 00:34:03.004 [2024-11-26 07:43:47.056586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.004 [2024-11-26 07:43:47.056627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.004 [2024-11-26 07:43:47.056645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.004 [2024-11-26 07:43:47.056654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.004 [2024-11-26 07:43:47.056664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.004 [2024-11-26 07:43:47.056672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.004 [2024-11-26 07:43:47.056682]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.004 [2024-11-26 07:43:47.056689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.004 [2024-11-26 07:43:47.056699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.004 [2024-11-26 07:43:47.056707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.056717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.056724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.056734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.056741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.056751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.056758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.056768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.056775] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.056784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.056792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.056801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.056808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.056820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.056828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.056843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.056850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.056860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.056874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.056884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.056891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.056900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.056908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.056917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.056925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.056934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.056942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.056952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.056959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.056969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.056976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 
[2024-11-26 07:43:47.056985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.056993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057378] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.005 [2024-11-26 07:43:47.057395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.005 [2024-11-26 07:43:47.057404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 07:43:47.057413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.057422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 07:43:47.057430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.057439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 07:43:47.057447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.057459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 07:43:47.057467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.057477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 07:43:47.057485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.057494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 07:43:47.057503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.057512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 07:43:47.057520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.057530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 07:43:47.057540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.057550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 07:43:47.057557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.057566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 07:43:47.057575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.057584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 07:43:47.057592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.057601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 07:43:47.057608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.057618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 07:43:47.057626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.057636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 07:43:47.057644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.057653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 07:43:47.057660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.057670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 
07:43:47.057677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.057687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 07:43:47.057694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.057704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 07:43:47.057711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.057722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 07:43:47.057730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.057739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.006 [2024-11-26 07:43:47.057747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.058986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:03.006 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.006 task offset: 9728 on job bdev=Nvme0n1 fails 00:34:03.006 00:34:03.006 Latency(us) 00:34:03.006 [2024-11-26T06:43:47.143Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:03.006 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:03.006 Job: Nvme0n1 ended in about 0.62 seconds with error 00:34:03.006 Verification LBA range: start 0x0 length 0x400 00:34:03.006 Nvme0n1 : 0.62 1757.43 109.84 103.38 0.00 33564.91 3181.23 32986.45 00:34:03.006 [2024-11-26T06:43:47.143Z] =================================================================================================================== 00:34:03.006 [2024-11-26T06:43:47.143Z] Total : 1757.43 109.84 103.38 0.00 33564.91 3181.23 32986.45 00:34:03.006 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:34:03.006 [2024-11-26 07:43:47.061008] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:03.006 [2024-11-26 07:43:47.061032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214db00 (9): Bad file descriptor 00:34:03.006 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.006 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:03.006 [2024-11-26 07:43:47.062328] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:34:03.006 [2024-11-26 07:43:47.062405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:34:03.006 [2024-11-26 07:43:47.062425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.006 [2024-11-26 07:43:47.062440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:34:03.006 [2024-11-26 07:43:47.062448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:34:03.006 [2024-11-26 07:43:47.062456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.006 [2024-11-26 07:43:47.062463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x214db00 00:34:03.006 [2024-11-26 07:43:47.062482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214db00 (9): Bad file descriptor 00:34:03.006 [2024-11-26 07:43:47.062494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:03.006 [2024-11-26 07:43:47.062501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:03.006 [2024-11-26 07:43:47.062510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:03.006 [2024-11-26 07:43:47.062523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:34:03.006 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.006 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:34:03.946 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2348679 00:34:03.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2348679) - No such process 00:34:04.206 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:34:04.206 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:34:04.206 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:34:04.206 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:34:04.206 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:34:04.206 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:34:04.206 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:04.206 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:04.206 { 00:34:04.206 "params": { 00:34:04.206 "name": "Nvme$subsystem", 00:34:04.206 "trtype": "$TEST_TRANSPORT", 00:34:04.206 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:34:04.206 "adrfam": "ipv4", 00:34:04.206 "trsvcid": "$NVMF_PORT", 00:34:04.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:04.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:04.206 "hdgst": ${hdgst:-false}, 00:34:04.206 "ddgst": ${ddgst:-false} 00:34:04.206 }, 00:34:04.206 "method": "bdev_nvme_attach_controller" 00:34:04.206 } 00:34:04.206 EOF 00:34:04.206 )") 00:34:04.206 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:34:04.206 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:34:04.206 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:34:04.206 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:04.206 "params": { 00:34:04.206 "name": "Nvme0", 00:34:04.206 "trtype": "tcp", 00:34:04.206 "traddr": "10.0.0.2", 00:34:04.206 "adrfam": "ipv4", 00:34:04.206 "trsvcid": "4420", 00:34:04.206 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:04.206 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:04.206 "hdgst": false, 00:34:04.206 "ddgst": false 00:34:04.206 }, 00:34:04.206 "method": "bdev_nvme_attach_controller" 00:34:04.206 }' 00:34:04.206 [2024-11-26 07:43:48.131467] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:34:04.206 [2024-11-26 07:43:48.131519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2349078 ] 00:34:04.206 [2024-11-26 07:43:48.209419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.206 [2024-11-26 07:43:48.245412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.467 Running I/O for 1 seconds... 
00:34:05.409 1664.00 IOPS, 104.00 MiB/s 00:34:05.409 Latency(us) 00:34:05.409 [2024-11-26T06:43:49.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:05.409 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:05.409 Verification LBA range: start 0x0 length 0x400 00:34:05.409 Nvme0n1 : 1.01 1712.43 107.03 0.00 0.00 36696.62 6990.51 32986.45 00:34:05.409 [2024-11-26T06:43:49.546Z] =================================================================================================================== 00:34:05.409 [2024-11-26T06:43:49.546Z] Total : 1712.43 107.03 0.00 0.00 36696.62 6990.51 32986.45 00:34:05.669 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:34:05.669 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:34:05.669 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:34:05.669 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:05.669 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:34:05.669 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:05.669 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:34:05.669 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:05.669 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:34:05.669 
07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:05.669 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:05.669 rmmod nvme_tcp 00:34:05.669 rmmod nvme_fabrics 00:34:05.670 rmmod nvme_keyring 00:34:05.670 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:05.670 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:34:05.670 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:34:05.670 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2348357 ']' 00:34:05.670 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2348357 00:34:05.670 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2348357 ']' 00:34:05.670 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2348357 00:34:05.670 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:34:05.670 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:05.670 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2348357 00:34:05.670 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:05.670 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:05.670 07:43:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2348357' 00:34:05.670 killing process with pid 2348357 00:34:05.670 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2348357 00:34:05.670 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2348357 00:34:05.930 [2024-11-26 07:43:49.814231] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:34:05.930 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:05.930 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:05.930 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:05.930 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:34:05.930 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:34:05.930 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:05.930 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:34:05.930 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:05.930 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:05.930 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.930 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.930 07:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.844 07:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:07.844 07:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:34:07.844 00:34:07.844 real 0m15.270s 00:34:07.844 user 0m18.947s 00:34:07.844 sys 0m8.002s 00:34:07.844 07:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:07.844 07:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:07.844 ************************************ 00:34:07.844 END TEST nvmf_host_management 00:34:07.844 ************************************ 00:34:07.844 07:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:34:07.844 07:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:07.844 07:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:07.844 07:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:08.106 ************************************ 00:34:08.106 START TEST nvmf_lvol 00:34:08.106 ************************************ 00:34:08.106 07:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:34:08.106 * Looking for test storage... 
00:34:08.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:08.106 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:08.106 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:34:08.106 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:08.106 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:08.106 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:08.106 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:08.106 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:08.106 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:34:08.106 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:34:08.106 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:34:08.106 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:34:08.106 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:34:08.106 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:08.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.107 --rc genhtml_branch_coverage=1 00:34:08.107 --rc genhtml_function_coverage=1 00:34:08.107 --rc genhtml_legend=1 00:34:08.107 --rc geninfo_all_blocks=1 00:34:08.107 --rc geninfo_unexecuted_blocks=1 00:34:08.107 00:34:08.107 ' 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:08.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.107 --rc genhtml_branch_coverage=1 00:34:08.107 --rc genhtml_function_coverage=1 00:34:08.107 --rc genhtml_legend=1 00:34:08.107 --rc geninfo_all_blocks=1 00:34:08.107 --rc geninfo_unexecuted_blocks=1 00:34:08.107 00:34:08.107 ' 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:08.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.107 --rc genhtml_branch_coverage=1 00:34:08.107 --rc genhtml_function_coverage=1 00:34:08.107 --rc genhtml_legend=1 00:34:08.107 --rc geninfo_all_blocks=1 00:34:08.107 --rc geninfo_unexecuted_blocks=1 00:34:08.107 00:34:08.107 ' 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:08.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.107 --rc genhtml_branch_coverage=1 00:34:08.107 --rc genhtml_function_coverage=1 00:34:08.107 --rc genhtml_legend=1 00:34:08.107 --rc geninfo_all_blocks=1 00:34:08.107 --rc geninfo_unexecuted_blocks=1 00:34:08.107 00:34:08.107 ' 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:08.107 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:08.108 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.108 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:08.108 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.108 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:08.108 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:08.108 
07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:34:08.108 07:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:34:16.257 07:44:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:16.257 07:44:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:16.257 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:16.257 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:16.257 07:44:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:16.257 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:16.258 Found net devices under 0000:31:00.0: cvl_0_0 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.258 07:44:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:16.258 Found net devices under 0000:31:00.1: cvl_0_1 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:16.258 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:16.551 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:16.551 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:16.551 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:16.551 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:16.551 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:16.551 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:16.551 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:16.551 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:16.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:16.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:34:16.551 00:34:16.551 --- 10.0.0.2 ping statistics --- 00:34:16.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.551 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:34:16.551 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:16.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:16.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:34:16.551 00:34:16.551 --- 10.0.0.1 ping statistics --- 00:34:16.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.551 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:34:16.551 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:16.551 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:34:16.551 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:16.551 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:16.551 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:16.551 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:16.551 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:16.551 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:16.551 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:16.935 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:34:16.935 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:16.935 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:16.935 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:16.935 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2354017 
00:34:16.935 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2354017 00:34:16.935 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2354017 ']' 00:34:16.935 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.935 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:16.935 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.935 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:16.935 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:16.935 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:34:16.935 [2024-11-26 07:44:00.734764] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:16.935 [2024-11-26 07:44:00.735818] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:34:16.935 [2024-11-26 07:44:00.735871] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.935 [2024-11-26 07:44:00.824481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:16.935 [2024-11-26 07:44:00.864119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:16.935 [2024-11-26 07:44:00.864154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:16.935 [2024-11-26 07:44:00.864162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:16.935 [2024-11-26 07:44:00.864169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:16.935 [2024-11-26 07:44:00.864174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:16.935 [2024-11-26 07:44:00.865661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:16.935 [2024-11-26 07:44:00.865775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:16.935 [2024-11-26 07:44:00.865777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.935 [2024-11-26 07:44:00.921006] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:16.935 [2024-11-26 07:44:00.921482] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:16.935 [2024-11-26 07:44:00.921808] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:34:16.935 [2024-11-26 07:44:00.922101] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:17.526 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:17.526 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:34:17.526 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:17.526 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:17.526 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:17.526 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:17.526 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:17.787 [2024-11-26 07:44:01.714610] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:17.787 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:18.048 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:34:18.048 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:18.048 07:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:34:18.048 07:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:34:18.309 07:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:34:18.571 07:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4425257e-0068-4336-9312-7052dd4315be 00:34:18.571 07:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4425257e-0068-4336-9312-7052dd4315be lvol 20 00:34:18.571 07:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=81d64b04-97a3-4fee-a07e-de1dd87e141e 00:34:18.571 07:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:18.833 07:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 81d64b04-97a3-4fee-a07e-de1dd87e141e 00:34:19.093 07:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:19.093 [2024-11-26 07:44:03.166451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:19.093 07:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:19.353 
07:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2354478 00:34:19.353 07:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:34:19.353 07:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:34:20.294 07:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 81d64b04-97a3-4fee-a07e-de1dd87e141e MY_SNAPSHOT 00:34:20.554 07:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f5a9ebd5-ced3-4bf8-b7a6-85a0c1b88be6 00:34:20.554 07:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 81d64b04-97a3-4fee-a07e-de1dd87e141e 30 00:34:20.815 07:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f5a9ebd5-ced3-4bf8-b7a6-85a0c1b88be6 MY_CLONE 00:34:21.075 07:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ed742250-aadb-4888-a750-f867d318e5af 00:34:21.075 07:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ed742250-aadb-4888-a750-f867d318e5af 00:34:21.336 07:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2354478 00:34:29.519 Initializing NVMe Controllers 00:34:29.519 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:34:29.519 
Controller IO queue size 128, less than required. 00:34:29.519 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:29.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:34:29.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:34:29.519 Initialization complete. Launching workers. 00:34:29.519 ======================================================== 00:34:29.519 Latency(us) 00:34:29.519 Device Information : IOPS MiB/s Average min max 00:34:29.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12298.03 48.04 10413.95 1601.12 68714.13 00:34:29.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15794.33 61.70 8104.35 4028.86 62319.58 00:34:29.519 ======================================================== 00:34:29.519 Total : 28092.36 109.74 9115.42 1601.12 68714.13 00:34:29.519 00:34:29.519 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:29.780 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 81d64b04-97a3-4fee-a07e-de1dd87e141e 00:34:30.041 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4425257e-0068-4336-9312-7052dd4315be 00:34:30.041 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:34:30.041 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:34:30.041 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:34:30.041 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:30.041 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:34:30.041 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:30.041 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:34:30.041 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:30.041 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:30.041 rmmod nvme_tcp 00:34:30.041 rmmod nvme_fabrics 00:34:30.301 rmmod nvme_keyring 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2354017 ']' 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2354017 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2354017 ']' 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2354017 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2354017 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2354017' 00:34:30.301 killing process with pid 2354017 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2354017 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2354017 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:30.301 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:34:30.562 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:30.562 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:30.562 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.562 07:44:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:30.562 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.475 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:32.475 00:34:32.475 real 0m24.518s 00:34:32.475 user 0m55.419s 00:34:32.475 sys 0m11.349s 00:34:32.475 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:32.475 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:32.475 ************************************ 00:34:32.475 END TEST nvmf_lvol 00:34:32.475 ************************************ 00:34:32.475 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:32.475 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:32.475 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:32.475 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:32.475 ************************************ 00:34:32.475 START TEST nvmf_lvs_grow 00:34:32.475 ************************************ 00:34:32.476 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:32.738 * Looking for test storage... 
00:34:32.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:32.738 07:44:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:32.738 07:44:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:32.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.738 --rc genhtml_branch_coverage=1 00:34:32.738 --rc genhtml_function_coverage=1 00:34:32.738 --rc genhtml_legend=1 00:34:32.738 --rc geninfo_all_blocks=1 00:34:32.738 --rc geninfo_unexecuted_blocks=1 00:34:32.738 00:34:32.738 ' 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:32.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.738 --rc genhtml_branch_coverage=1 00:34:32.738 --rc genhtml_function_coverage=1 00:34:32.738 --rc genhtml_legend=1 00:34:32.738 --rc geninfo_all_blocks=1 00:34:32.738 --rc geninfo_unexecuted_blocks=1 00:34:32.738 00:34:32.738 ' 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:32.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.738 --rc genhtml_branch_coverage=1 00:34:32.738 --rc genhtml_function_coverage=1 00:34:32.738 --rc genhtml_legend=1 00:34:32.738 --rc geninfo_all_blocks=1 00:34:32.738 --rc geninfo_unexecuted_blocks=1 00:34:32.738 00:34:32.738 ' 00:34:32.738 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:32.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.738 --rc genhtml_branch_coverage=1 00:34:32.738 --rc genhtml_function_coverage=1 00:34:32.738 --rc genhtml_legend=1 00:34:32.738 --rc geninfo_all_blocks=1 00:34:32.738 --rc 
geninfo_unexecuted_blocks=1 00:34:32.738 00:34:32.738 ' 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:32.739 07:44:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.739 07:44:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:32.739 07:44:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:34:32.739 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:40.881 
07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:40.881 07:44:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:40.881 07:44:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:40.881 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:40.881 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:40.881 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:40.882 Found net devices under 0000:31:00.0: cvl_0_0 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:40.882 07:44:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:40.882 Found net devices under 0000:31:00.1: cvl_0_1 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:40.882 
07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:40.882 07:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:41.142 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:41.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:41.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:34:41.143 00:34:41.143 --- 10.0.0.2 ping statistics --- 00:34:41.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.143 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:41.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:41.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:34:41.143 00:34:41.143 --- 10.0.0.1 ping statistics --- 00:34:41.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.143 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:41.143 07:44:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2361177 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2361177 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2361177 ']' 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:41.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:41.143 07:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:41.404 [2024-11-26 07:44:25.283453] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:41.404 [2024-11-26 07:44:25.284471] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:34:41.404 [2024-11-26 07:44:25.284509] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:41.404 [2024-11-26 07:44:25.372716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:41.404 [2024-11-26 07:44:25.409142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:41.404 [2024-11-26 07:44:25.409178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:41.404 [2024-11-26 07:44:25.409187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:41.404 [2024-11-26 07:44:25.409195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:41.404 [2024-11-26 07:44:25.409202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:41.404 [2024-11-26 07:44:25.409787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:41.404 [2024-11-26 07:44:25.464379] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:41.404 [2024-11-26 07:44:25.464629] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:41.976 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:41.976 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:34:41.976 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:41.976 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:41.976 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:42.237 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:42.237 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:42.237 [2024-11-26 07:44:26.266547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:42.237 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:34:42.237 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:42.237 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:42.237 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:42.237 ************************************ 00:34:42.237 START TEST lvs_grow_clean 00:34:42.237 ************************************ 00:34:42.237 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:34:42.237 07:44:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:42.237 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:42.237 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:42.237 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:42.237 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:42.237 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:42.237 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:42.237 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:42.237 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:42.498 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:42.498 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:42.759 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f91b9b38-a8f8-492d-bc25-a76c80dc6faf 00:34:42.759 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f91b9b38-a8f8-492d-bc25-a76c80dc6faf 00:34:42.759 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:42.759 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:42.759 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:42.759 07:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f91b9b38-a8f8-492d-bc25-a76c80dc6faf lvol 150 00:34:43.020 07:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b0a10842-eb3c-4a9d-997b-380ea6c417cc 00:34:43.020 07:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:43.020 07:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:43.282 [2024-11-26 07:44:27.186291] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:43.282 [2024-11-26 07:44:27.186457] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:43.282 true 00:34:43.282 07:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f91b9b38-a8f8-492d-bc25-a76c80dc6faf 00:34:43.282 07:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:43.282 07:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:43.282 07:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:43.542 07:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b0a10842-eb3c-4a9d-997b-380ea6c417cc 00:34:43.805 07:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:43.805 [2024-11-26 07:44:27.874439] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:43.805 07:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:44.066 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2361885 00:34:44.066 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:44.066 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:44.066 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2361885 /var/tmp/bdevperf.sock 00:34:44.066 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2361885 ']' 00:34:44.066 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:44.066 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:44.066 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:44.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
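An aside on the numbers logged above: the test truncates a 200 MiB AIO file, creates it as a 4096-byte-block bdev, builds an lvstore with a 4 MiB cluster size, and the log then checks `data_clusters == 49` and later reports an old block count of 51200. A minimal sketch of that arithmetic (the one-cluster deduction is inferred from the logged value — presumably lvstore metadata overhead — not taken from SPDK documentation):

```python
MiB = 1024 * 1024

aio_size = 200 * MiB        # truncate -s 200M .../aio_bdev
cluster_size = 4 * MiB      # bdev_lvol_create_lvstore --cluster-sz 4194304
block_size = 4096           # bdev_aio_create ... aio_bdev 4096

total_clusters = aio_size // cluster_size   # 50 raw clusters
# The log reports 49 data clusters, one short of the raw total --
# assumed here to be reserved for lvstore metadata.
data_clusters = total_clusters - 1

print(total_clusters, data_clusters)        # 50 49
print(aio_size // block_size)               # 51200, the "old block count" in the rescan notice
```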
00:34:44.066 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:44.066 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:44.066 [2024-11-26 07:44:28.116293] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:34:44.066 [2024-11-26 07:44:28.116349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361885 ] 00:34:44.327 [2024-11-26 07:44:28.212064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.327 [2024-11-26 07:44:28.248797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:44.899 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:44.899 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:34:44.899 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:45.162 Nvme0n1 00:34:45.162 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:45.423 [ 00:34:45.423 { 00:34:45.423 "name": "Nvme0n1", 00:34:45.423 "aliases": [ 00:34:45.423 "b0a10842-eb3c-4a9d-997b-380ea6c417cc" 00:34:45.423 ], 00:34:45.423 "product_name": "NVMe disk", 00:34:45.423 
"block_size": 4096, 00:34:45.423 "num_blocks": 38912, 00:34:45.423 "uuid": "b0a10842-eb3c-4a9d-997b-380ea6c417cc", 00:34:45.423 "numa_id": 0, 00:34:45.423 "assigned_rate_limits": { 00:34:45.423 "rw_ios_per_sec": 0, 00:34:45.423 "rw_mbytes_per_sec": 0, 00:34:45.423 "r_mbytes_per_sec": 0, 00:34:45.423 "w_mbytes_per_sec": 0 00:34:45.423 }, 00:34:45.423 "claimed": false, 00:34:45.423 "zoned": false, 00:34:45.423 "supported_io_types": { 00:34:45.423 "read": true, 00:34:45.423 "write": true, 00:34:45.423 "unmap": true, 00:34:45.423 "flush": true, 00:34:45.423 "reset": true, 00:34:45.423 "nvme_admin": true, 00:34:45.423 "nvme_io": true, 00:34:45.423 "nvme_io_md": false, 00:34:45.423 "write_zeroes": true, 00:34:45.423 "zcopy": false, 00:34:45.423 "get_zone_info": false, 00:34:45.423 "zone_management": false, 00:34:45.423 "zone_append": false, 00:34:45.423 "compare": true, 00:34:45.423 "compare_and_write": true, 00:34:45.423 "abort": true, 00:34:45.423 "seek_hole": false, 00:34:45.423 "seek_data": false, 00:34:45.423 "copy": true, 00:34:45.423 "nvme_iov_md": false 00:34:45.423 }, 00:34:45.423 "memory_domains": [ 00:34:45.423 { 00:34:45.423 "dma_device_id": "system", 00:34:45.423 "dma_device_type": 1 00:34:45.423 } 00:34:45.423 ], 00:34:45.423 "driver_specific": { 00:34:45.423 "nvme": [ 00:34:45.423 { 00:34:45.423 "trid": { 00:34:45.423 "trtype": "TCP", 00:34:45.423 "adrfam": "IPv4", 00:34:45.423 "traddr": "10.0.0.2", 00:34:45.423 "trsvcid": "4420", 00:34:45.423 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:45.423 }, 00:34:45.423 "ctrlr_data": { 00:34:45.423 "cntlid": 1, 00:34:45.423 "vendor_id": "0x8086", 00:34:45.423 "model_number": "SPDK bdev Controller", 00:34:45.423 "serial_number": "SPDK0", 00:34:45.423 "firmware_revision": "25.01", 00:34:45.423 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:45.423 "oacs": { 00:34:45.423 "security": 0, 00:34:45.423 "format": 0, 00:34:45.423 "firmware": 0, 00:34:45.423 "ns_manage": 0 00:34:45.423 }, 00:34:45.423 "multi_ctrlr": true, 
00:34:45.423 "ana_reporting": false 00:34:45.423 }, 00:34:45.423 "vs": { 00:34:45.423 "nvme_version": "1.3" 00:34:45.423 }, 00:34:45.423 "ns_data": { 00:34:45.423 "id": 1, 00:34:45.423 "can_share": true 00:34:45.423 } 00:34:45.423 } 00:34:45.423 ], 00:34:45.423 "mp_policy": "active_passive" 00:34:45.423 } 00:34:45.423 } 00:34:45.423 ] 00:34:45.423 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2362071 00:34:45.423 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:45.423 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:45.423 Running I/O for 10 seconds... 00:34:46.367 Latency(us) 00:34:46.367 [2024-11-26T06:44:30.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:46.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:46.368 Nvme0n1 : 1.00 17790.00 69.49 0.00 0.00 0.00 0.00 0.00 00:34:46.368 [2024-11-26T06:44:30.505Z] =================================================================================================================== 00:34:46.368 [2024-11-26T06:44:30.505Z] Total : 17790.00 69.49 0.00 0.00 0.00 0.00 0.00 00:34:46.368 00:34:47.312 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f91b9b38-a8f8-492d-bc25-a76c80dc6faf 00:34:47.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:47.312 Nvme0n1 : 2.00 17912.00 69.97 0.00 0.00 0.00 0.00 0.00 00:34:47.312 [2024-11-26T06:44:31.449Z] 
=================================================================================================================== 00:34:47.312 [2024-11-26T06:44:31.449Z] Total : 17912.00 69.97 0.00 0.00 0.00 0.00 0.00 00:34:47.312 00:34:47.573 true 00:34:47.573 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f91b9b38-a8f8-492d-bc25-a76c80dc6faf 00:34:47.573 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:47.573 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:47.573 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:47.573 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2362071 00:34:48.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:48.515 Nvme0n1 : 3.00 17952.67 70.13 0.00 0.00 0.00 0.00 0.00 00:34:48.515 [2024-11-26T06:44:32.652Z] =================================================================================================================== 00:34:48.515 [2024-11-26T06:44:32.652Z] Total : 17952.67 70.13 0.00 0.00 0.00 0.00 0.00 00:34:48.515 00:34:49.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:49.458 Nvme0n1 : 4.00 18004.75 70.33 0.00 0.00 0.00 0.00 0.00 00:34:49.458 [2024-11-26T06:44:33.595Z] =================================================================================================================== 00:34:49.458 [2024-11-26T06:44:33.595Z] Total : 18004.75 70.33 0.00 0.00 0.00 0.00 0.00 00:34:49.458 00:34:50.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:34:50.400 Nvme0n1 : 5.00 18036.00 70.45 0.00 0.00 0.00 0.00 0.00 00:34:50.400 [2024-11-26T06:44:34.537Z] =================================================================================================================== 00:34:50.400 [2024-11-26T06:44:34.537Z] Total : 18036.00 70.45 0.00 0.00 0.00 0.00 0.00 00:34:50.400 00:34:51.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:51.340 Nvme0n1 : 6.00 18056.83 70.53 0.00 0.00 0.00 0.00 0.00 00:34:51.340 [2024-11-26T06:44:35.477Z] =================================================================================================================== 00:34:51.340 [2024-11-26T06:44:35.477Z] Total : 18056.83 70.53 0.00 0.00 0.00 0.00 0.00 00:34:51.340 00:34:52.281 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:52.281 Nvme0n1 : 7.00 18071.71 70.59 0.00 0.00 0.00 0.00 0.00 00:34:52.281 [2024-11-26T06:44:36.418Z] =================================================================================================================== 00:34:52.281 [2024-11-26T06:44:36.418Z] Total : 18071.71 70.59 0.00 0.00 0.00 0.00 0.00 00:34:52.281 00:34:53.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:53.664 Nvme0n1 : 8.00 18082.88 70.64 0.00 0.00 0.00 0.00 0.00 00:34:53.664 [2024-11-26T06:44:37.801Z] =================================================================================================================== 00:34:53.664 [2024-11-26T06:44:37.801Z] Total : 18082.88 70.64 0.00 0.00 0.00 0.00 0.00 00:34:53.664 00:34:54.607 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:54.607 Nvme0n1 : 9.00 18091.56 70.67 0.00 0.00 0.00 0.00 0.00 00:34:54.607 [2024-11-26T06:44:38.744Z] =================================================================================================================== 00:34:54.607 [2024-11-26T06:44:38.744Z] Total : 18091.56 70.67 0.00 0.00 0.00 0.00 0.00 00:34:54.607 
00:34:55.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:55.598 Nvme0n1 : 10.00 18104.90 70.72 0.00 0.00 0.00 0.00 0.00 00:34:55.598 [2024-11-26T06:44:39.735Z] =================================================================================================================== 00:34:55.598 [2024-11-26T06:44:39.735Z] Total : 18104.90 70.72 0.00 0.00 0.00 0.00 0.00 00:34:55.598 00:34:55.598 00:34:55.598 Latency(us) 00:34:55.598 [2024-11-26T06:44:39.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:55.598 Nvme0n1 : 10.00 18109.74 70.74 0.00 0.00 7065.55 2307.41 13544.11 00:34:55.598 [2024-11-26T06:44:39.735Z] =================================================================================================================== 00:34:55.598 [2024-11-26T06:44:39.735Z] Total : 18109.74 70.74 0.00 0.00 7065.55 2307.41 13544.11 00:34:55.598 { 00:34:55.598 "results": [ 00:34:55.598 { 00:34:55.598 "job": "Nvme0n1", 00:34:55.598 "core_mask": "0x2", 00:34:55.598 "workload": "randwrite", 00:34:55.598 "status": "finished", 00:34:55.598 "queue_depth": 128, 00:34:55.598 "io_size": 4096, 00:34:55.598 "runtime": 10.004398, 00:34:55.598 "iops": 18109.73533839817, 00:34:55.598 "mibps": 70.74115366561786, 00:34:55.598 "io_failed": 0, 00:34:55.598 "io_timeout": 0, 00:34:55.598 "avg_latency_us": 7065.5499992456735, 00:34:55.598 "min_latency_us": 2307.4133333333334, 00:34:55.598 "max_latency_us": 13544.106666666667 00:34:55.598 } 00:34:55.598 ], 00:34:55.598 "core_count": 1 00:34:55.598 } 00:34:55.598 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2361885 00:34:55.598 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2361885 ']' 00:34:55.599 07:44:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2361885 00:34:55.599 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:34:55.599 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:55.599 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2361885 00:34:55.599 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:55.599 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:55.599 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2361885' 00:34:55.599 killing process with pid 2361885 00:34:55.599 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2361885 00:34:55.599 Received shutdown signal, test time was about 10.000000 seconds 00:34:55.599 00:34:55.599 Latency(us) 00:34:55.599 [2024-11-26T06:44:39.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.599 [2024-11-26T06:44:39.736Z] =================================================================================================================== 00:34:55.599 [2024-11-26T06:44:39.736Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:55.599 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2361885 00:34:55.599 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:55.859 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:55.859 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f91b9b38-a8f8-492d-bc25-a76c80dc6faf 00:34:55.859 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:56.120 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:56.120 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:34:56.120 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:56.380 [2024-11-26 07:44:40.310168] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:56.380 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f91b9b38-a8f8-492d-bc25-a76c80dc6faf 00:34:56.380 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:34:56.380 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f91b9b38-a8f8-492d-bc25-a76c80dc6faf 00:34:56.380 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:56.380 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:56.380 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:56.380 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:56.380 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:56.380 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:56.381 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:56.381 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:56.381 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f91b9b38-a8f8-492d-bc25-a76c80dc6faf 00:34:56.641 request: 00:34:56.641 { 00:34:56.641 "uuid": "f91b9b38-a8f8-492d-bc25-a76c80dc6faf", 00:34:56.641 "method": 
"bdev_lvol_get_lvstores", 00:34:56.641 "req_id": 1 00:34:56.641 } 00:34:56.641 Got JSON-RPC error response 00:34:56.641 response: 00:34:56.641 { 00:34:56.641 "code": -19, 00:34:56.641 "message": "No such device" 00:34:56.641 } 00:34:56.641 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:34:56.641 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:56.641 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:56.641 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:56.641 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:56.641 aio_bdev 00:34:56.642 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b0a10842-eb3c-4a9d-997b-380ea6c417cc 00:34:56.642 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=b0a10842-eb3c-4a9d-997b-380ea6c417cc 00:34:56.642 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:56.642 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:34:56.642 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:56.642 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:56.642 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:56.902 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b0a10842-eb3c-4a9d-997b-380ea6c417cc -t 2000 00:34:57.164 [ 00:34:57.164 { 00:34:57.164 "name": "b0a10842-eb3c-4a9d-997b-380ea6c417cc", 00:34:57.164 "aliases": [ 00:34:57.164 "lvs/lvol" 00:34:57.164 ], 00:34:57.164 "product_name": "Logical Volume", 00:34:57.164 "block_size": 4096, 00:34:57.164 "num_blocks": 38912, 00:34:57.164 "uuid": "b0a10842-eb3c-4a9d-997b-380ea6c417cc", 00:34:57.164 "assigned_rate_limits": { 00:34:57.164 "rw_ios_per_sec": 0, 00:34:57.164 "rw_mbytes_per_sec": 0, 00:34:57.164 "r_mbytes_per_sec": 0, 00:34:57.164 "w_mbytes_per_sec": 0 00:34:57.164 }, 00:34:57.164 "claimed": false, 00:34:57.164 "zoned": false, 00:34:57.164 "supported_io_types": { 00:34:57.164 "read": true, 00:34:57.164 "write": true, 00:34:57.164 "unmap": true, 00:34:57.164 "flush": false, 00:34:57.164 "reset": true, 00:34:57.164 "nvme_admin": false, 00:34:57.164 "nvme_io": false, 00:34:57.164 "nvme_io_md": false, 00:34:57.164 "write_zeroes": true, 00:34:57.164 "zcopy": false, 00:34:57.164 "get_zone_info": false, 00:34:57.164 "zone_management": false, 00:34:57.164 "zone_append": false, 00:34:57.164 "compare": false, 00:34:57.164 "compare_and_write": false, 00:34:57.164 "abort": false, 00:34:57.164 "seek_hole": true, 00:34:57.164 "seek_data": true, 00:34:57.164 "copy": false, 00:34:57.164 "nvme_iov_md": false 00:34:57.164 }, 00:34:57.164 "driver_specific": { 00:34:57.164 "lvol": { 00:34:57.164 "lvol_store_uuid": "f91b9b38-a8f8-492d-bc25-a76c80dc6faf", 00:34:57.164 "base_bdev": "aio_bdev", 00:34:57.164 
"thin_provision": false, 00:34:57.164 "num_allocated_clusters": 38, 00:34:57.164 "snapshot": false, 00:34:57.164 "clone": false, 00:34:57.164 "esnap_clone": false 00:34:57.164 } 00:34:57.164 } 00:34:57.164 } 00:34:57.164 ] 00:34:57.164 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:34:57.164 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f91b9b38-a8f8-492d-bc25-a76c80dc6faf 00:34:57.164 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:57.164 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:57.164 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f91b9b38-a8f8-492d-bc25-a76c80dc6faf 00:34:57.164 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:57.425 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:57.425 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b0a10842-eb3c-4a9d-997b-380ea6c417cc 00:34:57.686 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f91b9b38-a8f8-492d-bc25-a76c80dc6faf 
00:34:57.686 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:57.947 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:57.947 00:34:57.947 real 0m15.688s 00:34:57.947 user 0m15.361s 00:34:57.947 sys 0m1.378s 00:34:57.947 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:57.947 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:57.947 ************************************ 00:34:57.947 END TEST lvs_grow_clean 00:34:57.947 ************************************ 00:34:57.947 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:34:57.947 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:57.947 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:57.947 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:58.209 ************************************ 00:34:58.209 START TEST lvs_grow_dirty 00:34:58.209 ************************************ 00:34:58.209 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:34:58.209 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:58.209 07:44:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:58.209 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:58.209 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:58.210 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:58.210 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:58.210 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:58.210 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:58.210 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:58.210 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:58.210 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:58.472 07:44:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a7352963-26e0-4c00-bdd4-b6cb057a2b89 00:34:58.472 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a7352963-26e0-4c00-bdd4-b6cb057a2b89 00:34:58.472 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:58.734 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:58.734 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:58.734 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a7352963-26e0-4c00-bdd4-b6cb057a2b89 lvol 150 00:34:58.734 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6b572106-045e-4107-ae1a-2fa4533845eb 00:34:58.734 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:58.734 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:58.996 [2024-11-26 07:44:43.006243] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:58.996 [2024-11-26 
07:44:43.006389] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:58.996 true 00:34:58.996 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a7352963-26e0-4c00-bdd4-b6cb057a2b89 00:34:58.996 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:59.256 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:59.256 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:59.256 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6b572106-045e-4107-ae1a-2fa4533845eb 00:34:59.517 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:59.778 [2024-11-26 07:44:43.662431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:59.778 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:59.778 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:59.778 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2364886 00:34:59.778 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:59.778 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2364886 /var/tmp/bdevperf.sock 00:34:59.778 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2364886 ']' 00:34:59.778 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:59.778 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:59.778 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:59.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:59.778 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:59.778 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:59.778 [2024-11-26 07:44:43.877172] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:34:59.778 [2024-11-26 07:44:43.877224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364886 ] 00:35:00.038 [2024-11-26 07:44:43.967456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.038 [2024-11-26 07:44:43.997270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:00.038 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:00.038 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:35:00.038 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:35:00.298 Nvme0n1 00:35:00.558 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:35:00.558 [ 00:35:00.558 { 00:35:00.558 "name": "Nvme0n1", 00:35:00.558 "aliases": [ 00:35:00.558 "6b572106-045e-4107-ae1a-2fa4533845eb" 00:35:00.558 ], 00:35:00.558 "product_name": "NVMe disk", 00:35:00.558 "block_size": 4096, 00:35:00.558 "num_blocks": 38912, 00:35:00.558 "uuid": "6b572106-045e-4107-ae1a-2fa4533845eb", 00:35:00.558 "numa_id": 0, 00:35:00.558 "assigned_rate_limits": { 00:35:00.558 "rw_ios_per_sec": 0, 00:35:00.558 "rw_mbytes_per_sec": 0, 00:35:00.558 "r_mbytes_per_sec": 0, 00:35:00.558 "w_mbytes_per_sec": 0 00:35:00.558 }, 00:35:00.558 "claimed": false, 00:35:00.558 "zoned": false, 
00:35:00.558 "supported_io_types": { 00:35:00.558 "read": true, 00:35:00.558 "write": true, 00:35:00.558 "unmap": true, 00:35:00.558 "flush": true, 00:35:00.558 "reset": true, 00:35:00.558 "nvme_admin": true, 00:35:00.558 "nvme_io": true, 00:35:00.558 "nvme_io_md": false, 00:35:00.558 "write_zeroes": true, 00:35:00.558 "zcopy": false, 00:35:00.558 "get_zone_info": false, 00:35:00.558 "zone_management": false, 00:35:00.558 "zone_append": false, 00:35:00.558 "compare": true, 00:35:00.558 "compare_and_write": true, 00:35:00.558 "abort": true, 00:35:00.558 "seek_hole": false, 00:35:00.558 "seek_data": false, 00:35:00.558 "copy": true, 00:35:00.558 "nvme_iov_md": false 00:35:00.558 }, 00:35:00.558 "memory_domains": [ 00:35:00.558 { 00:35:00.558 "dma_device_id": "system", 00:35:00.558 "dma_device_type": 1 00:35:00.558 } 00:35:00.558 ], 00:35:00.558 "driver_specific": { 00:35:00.558 "nvme": [ 00:35:00.558 { 00:35:00.558 "trid": { 00:35:00.558 "trtype": "TCP", 00:35:00.558 "adrfam": "IPv4", 00:35:00.558 "traddr": "10.0.0.2", 00:35:00.558 "trsvcid": "4420", 00:35:00.559 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:35:00.559 }, 00:35:00.559 "ctrlr_data": { 00:35:00.559 "cntlid": 1, 00:35:00.559 "vendor_id": "0x8086", 00:35:00.559 "model_number": "SPDK bdev Controller", 00:35:00.559 "serial_number": "SPDK0", 00:35:00.559 "firmware_revision": "25.01", 00:35:00.559 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:00.559 "oacs": { 00:35:00.559 "security": 0, 00:35:00.559 "format": 0, 00:35:00.559 "firmware": 0, 00:35:00.559 "ns_manage": 0 00:35:00.559 }, 00:35:00.559 "multi_ctrlr": true, 00:35:00.559 "ana_reporting": false 00:35:00.559 }, 00:35:00.559 "vs": { 00:35:00.559 "nvme_version": "1.3" 00:35:00.559 }, 00:35:00.559 "ns_data": { 00:35:00.559 "id": 1, 00:35:00.559 "can_share": true 00:35:00.559 } 00:35:00.559 } 00:35:00.559 ], 00:35:00.559 "mp_policy": "active_passive" 00:35:00.559 } 00:35:00.559 } 00:35:00.559 ] 00:35:00.559 07:44:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2364977 00:35:00.559 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:35:00.559 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:00.819 Running I/O for 10 seconds... 00:35:01.762 Latency(us) 00:35:01.762 [2024-11-26T06:44:45.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:01.763 Nvme0n1 : 1.00 17788.00 69.48 0.00 0.00 0.00 0.00 0.00 00:35:01.763 [2024-11-26T06:44:45.900Z] =================================================================================================================== 00:35:01.763 [2024-11-26T06:44:45.900Z] Total : 17788.00 69.48 0.00 0.00 0.00 0.00 0.00 00:35:01.763 00:35:02.704 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a7352963-26e0-4c00-bdd4-b6cb057a2b89 00:35:02.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:02.704 Nvme0n1 : 2.00 17911.00 69.96 0.00 0.00 0.00 0.00 0.00 00:35:02.704 [2024-11-26T06:44:46.841Z] =================================================================================================================== 00:35:02.704 [2024-11-26T06:44:46.841Z] Total : 17911.00 69.96 0.00 0.00 0.00 0.00 0.00 00:35:02.704 00:35:02.704 true 00:35:02.704 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u a7352963-26e0-4c00-bdd4-b6cb057a2b89 00:35:02.704 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:35:02.964 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:35:02.964 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:35:02.964 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2364977 00:35:03.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:03.905 Nvme0n1 : 3.00 17952.00 70.12 0.00 0.00 0.00 0.00 0.00 00:35:03.905 [2024-11-26T06:44:48.042Z] =================================================================================================================== 00:35:03.905 [2024-11-26T06:44:48.042Z] Total : 17952.00 70.12 0.00 0.00 0.00 0.00 0.00 00:35:03.905 00:35:04.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:04.848 Nvme0n1 : 4.00 18004.25 70.33 0.00 0.00 0.00 0.00 0.00 00:35:04.848 [2024-11-26T06:44:48.985Z] =================================================================================================================== 00:35:04.848 [2024-11-26T06:44:48.985Z] Total : 18004.25 70.33 0.00 0.00 0.00 0.00 0.00 00:35:04.848 00:35:05.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:05.792 Nvme0n1 : 5.00 18035.60 70.45 0.00 0.00 0.00 0.00 0.00 00:35:05.792 [2024-11-26T06:44:49.929Z] =================================================================================================================== 00:35:05.792 [2024-11-26T06:44:49.929Z] Total : 18035.60 70.45 0.00 0.00 0.00 0.00 0.00 00:35:05.792 00:35:06.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:35:06.733 Nvme0n1 : 6.00 18035.33 70.45 0.00 0.00 0.00 0.00 0.00 00:35:06.733 [2024-11-26T06:44:50.870Z] =================================================================================================================== 00:35:06.733 [2024-11-26T06:44:50.870Z] Total : 18035.33 70.45 0.00 0.00 0.00 0.00 0.00 00:35:06.733 00:35:07.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:07.676 Nvme0n1 : 7.00 18053.29 70.52 0.00 0.00 0.00 0.00 0.00 00:35:07.676 [2024-11-26T06:44:51.813Z] =================================================================================================================== 00:35:07.676 [2024-11-26T06:44:51.813Z] Total : 18053.29 70.52 0.00 0.00 0.00 0.00 0.00 00:35:07.676 00:35:08.618 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:08.618 Nvme0n1 : 8.00 18068.88 70.58 0.00 0.00 0.00 0.00 0.00 00:35:08.618 [2024-11-26T06:44:52.755Z] =================================================================================================================== 00:35:08.618 [2024-11-26T06:44:52.755Z] Total : 18068.88 70.58 0.00 0.00 0.00 0.00 0.00 00:35:08.618 00:35:10.002 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:10.002 Nvme0n1 : 9.00 18086.22 70.65 0.00 0.00 0.00 0.00 0.00 00:35:10.002 [2024-11-26T06:44:54.139Z] =================================================================================================================== 00:35:10.002 [2024-11-26T06:44:54.139Z] Total : 18086.22 70.65 0.00 0.00 0.00 0.00 0.00 00:35:10.002 00:35:10.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:10.946 Nvme0n1 : 10.00 18100.00 70.70 0.00 0.00 0.00 0.00 0.00 00:35:10.946 [2024-11-26T06:44:55.083Z] =================================================================================================================== 00:35:10.946 [2024-11-26T06:44:55.083Z] Total : 18100.00 70.70 0.00 0.00 0.00 0.00 0.00 00:35:10.946 00:35:10.946 
00:35:10.946 Latency(us) 00:35:10.946 [2024-11-26T06:44:55.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:10.946 Nvme0n1 : 10.01 18102.04 70.71 0.00 0.00 7069.84 1774.93 13926.40 00:35:10.946 [2024-11-26T06:44:55.083Z] =================================================================================================================== 00:35:10.946 [2024-11-26T06:44:55.083Z] Total : 18102.04 70.71 0.00 0.00 7069.84 1774.93 13926.40 00:35:10.946 { 00:35:10.946 "results": [ 00:35:10.946 { 00:35:10.946 "job": "Nvme0n1", 00:35:10.946 "core_mask": "0x2", 00:35:10.946 "workload": "randwrite", 00:35:10.946 "status": "finished", 00:35:10.946 "queue_depth": 128, 00:35:10.946 "io_size": 4096, 00:35:10.946 "runtime": 10.005946, 00:35:10.946 "iops": 18102.03652907981, 00:35:10.946 "mibps": 70.71108019171801, 00:35:10.946 "io_failed": 0, 00:35:10.946 "io_timeout": 0, 00:35:10.946 "avg_latency_us": 7069.839040089513, 00:35:10.946 "min_latency_us": 1774.9333333333334, 00:35:10.946 "max_latency_us": 13926.4 00:35:10.946 } 00:35:10.946 ], 00:35:10.946 "core_count": 1 00:35:10.946 } 00:35:10.946 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2364886 00:35:10.946 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2364886 ']' 00:35:10.946 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2364886 00:35:10.946 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:35:10.946 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:10.946 07:44:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2364886 00:35:10.946 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:10.946 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:10.946 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2364886' 00:35:10.946 killing process with pid 2364886 00:35:10.946 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2364886 00:35:10.946 Received shutdown signal, test time was about 10.000000 seconds 00:35:10.946 00:35:10.946 Latency(us) 00:35:10.946 [2024-11-26T06:44:55.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.946 [2024-11-26T06:44:55.083Z] =================================================================================================================== 00:35:10.946 [2024-11-26T06:44:55.083Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:10.946 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2364886 00:35:10.946 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:11.208 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:11.208 07:44:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a7352963-26e0-4c00-bdd4-b6cb057a2b89 00:35:11.208 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:35:11.468 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:35:11.469 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:35:11.469 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2361177 00:35:11.469 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2361177 00:35:11.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2361177 Killed "${NVMF_APP[@]}" "$@" 00:35:11.469 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:35:11.469 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:35:11.469 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:11.469 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:11.469 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:11.469 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2366989 00:35:11.469 07:44:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2366989 00:35:11.469 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:35:11.469 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2366989 ']' 00:35:11.469 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:11.469 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:11.469 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:11.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:11.469 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:11.469 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:11.469 [2024-11-26 07:44:55.538832] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:11.469 [2024-11-26 07:44:55.539817] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:35:11.469 [2024-11-26 07:44:55.539859] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:11.730 [2024-11-26 07:44:55.626155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.730 [2024-11-26 07:44:55.663090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:11.730 [2024-11-26 07:44:55.663121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:11.730 [2024-11-26 07:44:55.663129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:11.730 [2024-11-26 07:44:55.663136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:11.730 [2024-11-26 07:44:55.663141] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:11.730 [2024-11-26 07:44:55.663700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:11.730 [2024-11-26 07:44:55.718132] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:11.730 [2024-11-26 07:44:55.718383] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:12.301 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:12.301 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:35:12.301 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:12.301 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:12.301 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:12.301 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:12.301 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:12.564 [2024-11-26 07:44:56.530591] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:35:12.564 [2024-11-26 07:44:56.530721] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:35:12.564 [2024-11-26 07:44:56.530754] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:35:12.564 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:35:12.564 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6b572106-045e-4107-ae1a-2fa4533845eb 00:35:12.564 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=6b572106-045e-4107-ae1a-2fa4533845eb 00:35:12.564 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:12.564 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:35:12.564 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:12.564 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:12.564 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:12.826 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6b572106-045e-4107-ae1a-2fa4533845eb -t 2000 00:35:12.826 [ 00:35:12.826 { 00:35:12.826 "name": "6b572106-045e-4107-ae1a-2fa4533845eb", 00:35:12.826 "aliases": [ 00:35:12.826 "lvs/lvol" 00:35:12.826 ], 00:35:12.826 "product_name": "Logical Volume", 00:35:12.826 "block_size": 4096, 00:35:12.826 "num_blocks": 38912, 00:35:12.826 "uuid": "6b572106-045e-4107-ae1a-2fa4533845eb", 00:35:12.826 "assigned_rate_limits": { 00:35:12.826 "rw_ios_per_sec": 0, 00:35:12.826 "rw_mbytes_per_sec": 0, 00:35:12.826 "r_mbytes_per_sec": 0, 00:35:12.826 "w_mbytes_per_sec": 0 00:35:12.826 }, 00:35:12.826 "claimed": false, 00:35:12.826 "zoned": false, 00:35:12.826 "supported_io_types": { 00:35:12.826 "read": true, 00:35:12.826 "write": true, 00:35:12.826 "unmap": true, 00:35:12.826 "flush": false, 00:35:12.826 "reset": true, 00:35:12.826 "nvme_admin": false, 00:35:12.826 "nvme_io": false, 00:35:12.826 "nvme_io_md": false, 00:35:12.826 "write_zeroes": true, 
00:35:12.826 "zcopy": false, 00:35:12.826 "get_zone_info": false, 00:35:12.826 "zone_management": false, 00:35:12.826 "zone_append": false, 00:35:12.826 "compare": false, 00:35:12.826 "compare_and_write": false, 00:35:12.826 "abort": false, 00:35:12.826 "seek_hole": true, 00:35:12.826 "seek_data": true, 00:35:12.826 "copy": false, 00:35:12.826 "nvme_iov_md": false 00:35:12.826 }, 00:35:12.826 "driver_specific": { 00:35:12.826 "lvol": { 00:35:12.826 "lvol_store_uuid": "a7352963-26e0-4c00-bdd4-b6cb057a2b89", 00:35:12.826 "base_bdev": "aio_bdev", 00:35:12.826 "thin_provision": false, 00:35:12.826 "num_allocated_clusters": 38, 00:35:12.826 "snapshot": false, 00:35:12.826 "clone": false, 00:35:12.826 "esnap_clone": false 00:35:12.826 } 00:35:12.826 } 00:35:12.826 } 00:35:12.826 ] 00:35:12.826 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:35:12.826 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a7352963-26e0-4c00-bdd4-b6cb057a2b89 00:35:12.826 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:35:13.088 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:35:13.088 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a7352963-26e0-4c00-bdd4-b6cb057a2b89 00:35:13.088 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:35:13.349 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:35:13.349 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:13.349 [2024-11-26 07:44:57.384127] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:35:13.349 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a7352963-26e0-4c00-bdd4-b6cb057a2b89 00:35:13.349 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:35:13.349 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a7352963-26e0-4c00-bdd4-b6cb057a2b89 00:35:13.349 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:13.349 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:13.349 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:13.349 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:13.349 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:13.349 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:13.349 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:13.350 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:35:13.350 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a7352963-26e0-4c00-bdd4-b6cb057a2b89 00:35:13.610 request: 00:35:13.610 { 00:35:13.610 "uuid": "a7352963-26e0-4c00-bdd4-b6cb057a2b89", 00:35:13.610 "method": "bdev_lvol_get_lvstores", 00:35:13.610 "req_id": 1 00:35:13.610 } 00:35:13.610 Got JSON-RPC error response 00:35:13.610 response: 00:35:13.610 { 00:35:13.610 "code": -19, 00:35:13.610 "message": "No such device" 00:35:13.610 } 00:35:13.610 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:35:13.610 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:13.610 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:13.610 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:13.610 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:13.872 aio_bdev 00:35:13.872 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6b572106-045e-4107-ae1a-2fa4533845eb 00:35:13.872 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6b572106-045e-4107-ae1a-2fa4533845eb 00:35:13.872 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:13.872 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:35:13.872 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:13.872 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:13.872 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:13.872 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6b572106-045e-4107-ae1a-2fa4533845eb -t 2000 00:35:14.133 [ 00:35:14.133 { 00:35:14.133 "name": "6b572106-045e-4107-ae1a-2fa4533845eb", 00:35:14.133 "aliases": [ 00:35:14.133 "lvs/lvol" 00:35:14.133 ], 00:35:14.133 "product_name": "Logical Volume", 00:35:14.133 "block_size": 4096, 00:35:14.133 "num_blocks": 38912, 00:35:14.133 "uuid": "6b572106-045e-4107-ae1a-2fa4533845eb", 00:35:14.133 "assigned_rate_limits": { 00:35:14.133 "rw_ios_per_sec": 0, 00:35:14.133 "rw_mbytes_per_sec": 0, 00:35:14.133 
"r_mbytes_per_sec": 0, 00:35:14.133 "w_mbytes_per_sec": 0 00:35:14.133 }, 00:35:14.133 "claimed": false, 00:35:14.133 "zoned": false, 00:35:14.133 "supported_io_types": { 00:35:14.133 "read": true, 00:35:14.133 "write": true, 00:35:14.133 "unmap": true, 00:35:14.133 "flush": false, 00:35:14.133 "reset": true, 00:35:14.133 "nvme_admin": false, 00:35:14.133 "nvme_io": false, 00:35:14.133 "nvme_io_md": false, 00:35:14.133 "write_zeroes": true, 00:35:14.133 "zcopy": false, 00:35:14.134 "get_zone_info": false, 00:35:14.134 "zone_management": false, 00:35:14.134 "zone_append": false, 00:35:14.134 "compare": false, 00:35:14.134 "compare_and_write": false, 00:35:14.134 "abort": false, 00:35:14.134 "seek_hole": true, 00:35:14.134 "seek_data": true, 00:35:14.134 "copy": false, 00:35:14.134 "nvme_iov_md": false 00:35:14.134 }, 00:35:14.134 "driver_specific": { 00:35:14.134 "lvol": { 00:35:14.134 "lvol_store_uuid": "a7352963-26e0-4c00-bdd4-b6cb057a2b89", 00:35:14.134 "base_bdev": "aio_bdev", 00:35:14.134 "thin_provision": false, 00:35:14.134 "num_allocated_clusters": 38, 00:35:14.134 "snapshot": false, 00:35:14.134 "clone": false, 00:35:14.134 "esnap_clone": false 00:35:14.134 } 00:35:14.134 } 00:35:14.134 } 00:35:14.134 ] 00:35:14.134 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:35:14.134 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a7352963-26e0-4c00-bdd4-b6cb057a2b89 00:35:14.134 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:35:14.395 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:35:14.395 07:44:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a7352963-26e0-4c00-bdd4-b6cb057a2b89 00:35:14.395 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:35:14.395 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:35:14.395 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6b572106-045e-4107-ae1a-2fa4533845eb 00:35:14.657 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a7352963-26e0-4c00-bdd4-b6cb057a2b89 00:35:14.919 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:14.919 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:15.180 00:35:15.180 real 0m16.961s 00:35:15.180 user 0m34.774s 00:35:15.180 sys 0m2.911s 00:35:15.180 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:15.180 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:15.180 ************************************ 00:35:15.180 END TEST lvs_grow_dirty 00:35:15.180 ************************************ 
00:35:15.180 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:35:15.180 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:35:15.180 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:35:15.181 nvmf_trace.0 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:15.181 07:44:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:15.181 rmmod nvme_tcp 00:35:15.181 rmmod nvme_fabrics 00:35:15.181 rmmod nvme_keyring 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2366989 ']' 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2366989 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2366989 ']' 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2366989 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2366989 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:15.181 
07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2366989' 00:35:15.181 killing process with pid 2366989 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2366989 00:35:15.181 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2366989 00:35:15.441 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:15.441 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:15.441 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:15.441 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:35:15.441 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:35:15.441 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:15.441 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:35:15.441 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:15.441 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:15.441 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:15.441 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:15.441 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.990 
07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:17.990 00:35:17.990 real 0m44.918s 00:35:17.990 user 0m53.371s 00:35:17.990 sys 0m11.036s 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:17.990 ************************************ 00:35:17.990 END TEST nvmf_lvs_grow 00:35:17.990 ************************************ 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:17.990 ************************************ 00:35:17.990 START TEST nvmf_bdev_io_wait 00:35:17.990 ************************************ 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:35:17.990 * Looking for test storage... 
00:35:17.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:17.990 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:17.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.991 --rc genhtml_branch_coverage=1 00:35:17.991 --rc genhtml_function_coverage=1 00:35:17.991 --rc genhtml_legend=1 00:35:17.991 --rc geninfo_all_blocks=1 00:35:17.991 --rc geninfo_unexecuted_blocks=1 00:35:17.991 00:35:17.991 ' 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:17.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.991 --rc genhtml_branch_coverage=1 00:35:17.991 --rc genhtml_function_coverage=1 00:35:17.991 --rc genhtml_legend=1 00:35:17.991 --rc geninfo_all_blocks=1 00:35:17.991 --rc geninfo_unexecuted_blocks=1 00:35:17.991 00:35:17.991 ' 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:17.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.991 --rc genhtml_branch_coverage=1 00:35:17.991 --rc genhtml_function_coverage=1 00:35:17.991 --rc genhtml_legend=1 00:35:17.991 --rc geninfo_all_blocks=1 00:35:17.991 --rc geninfo_unexecuted_blocks=1 00:35:17.991 00:35:17.991 ' 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:17.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.991 --rc genhtml_branch_coverage=1 00:35:17.991 --rc genhtml_function_coverage=1 
00:35:17.991 --rc genhtml_legend=1 00:35:17.991 --rc geninfo_all_blocks=1 00:35:17.991 --rc geninfo_unexecuted_blocks=1 00:35:17.991 00:35:17.991 ' 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:17.991 07:45:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.991 07:45:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:17.991 07:45:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:17.991 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:17.992 07:45:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:35:17.992 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:35:26.137 07:45:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:26.137 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:26.137 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.137 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:26.138 Found net devices under 0000:31:00.0: cvl_0_0 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:26.138 Found net devices under 0000:31:00.1: cvl_0_1 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:35:26.138 07:45:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:26.138 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:26.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:26.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:35:26.138 00:35:26.138 --- 10.0.0.2 ping statistics --- 00:35:26.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.138 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:26.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:26.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:35:26.138 00:35:26.138 --- 10.0.0.1 ping statistics --- 00:35:26.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.138 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:26.138 07:45:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2372868 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2372868 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2372868 ']' 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:26.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:26.138 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:26.139 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:26.139 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:35:26.139 [2024-11-26 07:45:10.256242] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:26.139 [2024-11-26 07:45:10.257404] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:35:26.139 [2024-11-26 07:45:10.257459] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:26.399 [2024-11-26 07:45:10.348138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:26.399 [2024-11-26 07:45:10.391183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:26.399 [2024-11-26 07:45:10.391219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:26.399 [2024-11-26 07:45:10.391228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:26.399 [2024-11-26 07:45:10.391235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:26.399 [2024-11-26 07:45:10.391240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:26.399 [2024-11-26 07:45:10.392839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:26.399 [2024-11-26 07:45:10.392984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:26.399 [2024-11-26 07:45:10.393309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:26.399 [2024-11-26 07:45:10.393310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:26.399 [2024-11-26 07:45:10.393711] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:26.970 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:26.970 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:35:26.970 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:26.970 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:26.970 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:26.970 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:26.970 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:35:26.970 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.970 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:27.230 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.230 07:45:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:35:27.230 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.230 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:27.230 [2024-11-26 07:45:11.147425] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:27.230 [2024-11-26 07:45:11.147846] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:27.230 [2024-11-26 07:45:11.148650] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:27.230 [2024-11-26 07:45:11.148746] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:35:27.230 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.230 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:27.230 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.230 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:27.230 [2024-11-26 07:45:11.157905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:27.230 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.230 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:27.230 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.230 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:27.230 Malloc0 00:35:27.230 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.230 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:27.230 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.230 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:27.230 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.230 07:45:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:27.230 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:27.231 [2024-11-26 07:45:11.222079] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2373305 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2373307 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:35:27.231 07:45:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:27.231 { 00:35:27.231 "params": { 00:35:27.231 "name": "Nvme$subsystem", 00:35:27.231 "trtype": "$TEST_TRANSPORT", 00:35:27.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:27.231 "adrfam": "ipv4", 00:35:27.231 "trsvcid": "$NVMF_PORT", 00:35:27.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:27.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:27.231 "hdgst": ${hdgst:-false}, 00:35:27.231 "ddgst": ${ddgst:-false} 00:35:27.231 }, 00:35:27.231 "method": "bdev_nvme_attach_controller" 00:35:27.231 } 00:35:27.231 EOF 00:35:27.231 )") 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2373309 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:27.231 07:45:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2373312 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:27.231 { 00:35:27.231 "params": { 00:35:27.231 "name": "Nvme$subsystem", 00:35:27.231 "trtype": "$TEST_TRANSPORT", 00:35:27.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:27.231 "adrfam": "ipv4", 00:35:27.231 "trsvcid": "$NVMF_PORT", 00:35:27.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:27.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:27.231 "hdgst": ${hdgst:-false}, 00:35:27.231 "ddgst": ${ddgst:-false} 00:35:27.231 }, 00:35:27.231 "method": "bdev_nvme_attach_controller" 00:35:27.231 } 00:35:27.231 EOF 00:35:27.231 )") 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:27.231 { 00:35:27.231 "params": { 00:35:27.231 "name": "Nvme$subsystem", 00:35:27.231 "trtype": "$TEST_TRANSPORT", 00:35:27.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:27.231 "adrfam": "ipv4", 00:35:27.231 "trsvcid": "$NVMF_PORT", 00:35:27.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:27.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:27.231 "hdgst": ${hdgst:-false}, 00:35:27.231 "ddgst": ${ddgst:-false} 00:35:27.231 }, 00:35:27.231 "method": "bdev_nvme_attach_controller" 00:35:27.231 } 00:35:27.231 EOF 00:35:27.231 )") 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:27.231 { 00:35:27.231 "params": { 00:35:27.231 "name": "Nvme$subsystem", 00:35:27.231 "trtype": "$TEST_TRANSPORT", 00:35:27.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:27.231 "adrfam": "ipv4", 00:35:27.231 "trsvcid": "$NVMF_PORT", 00:35:27.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:27.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:27.231 "hdgst": ${hdgst:-false}, 00:35:27.231 "ddgst": ${ddgst:-false} 00:35:27.231 }, 00:35:27.231 "method": 
"bdev_nvme_attach_controller" 00:35:27.231 } 00:35:27.231 EOF 00:35:27.231 )") 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2373305 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:27.231 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:27.231 "params": { 00:35:27.231 "name": "Nvme1", 00:35:27.231 "trtype": "tcp", 00:35:27.231 "traddr": "10.0.0.2", 00:35:27.231 "adrfam": "ipv4", 00:35:27.231 "trsvcid": "4420", 00:35:27.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:27.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:27.231 "hdgst": false, 00:35:27.231 "ddgst": false 00:35:27.231 }, 00:35:27.231 "method": "bdev_nvme_attach_controller" 00:35:27.231 }' 00:35:27.232 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:27.232 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:35:27.232 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:27.232 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:27.232 "params": { 00:35:27.232 "name": "Nvme1", 00:35:27.232 "trtype": "tcp", 00:35:27.232 "traddr": "10.0.0.2", 00:35:27.232 "adrfam": "ipv4", 00:35:27.232 "trsvcid": "4420", 00:35:27.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:27.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:27.232 "hdgst": false, 00:35:27.232 "ddgst": false 00:35:27.232 }, 00:35:27.232 "method": "bdev_nvme_attach_controller" 00:35:27.232 }' 00:35:27.232 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:27.232 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:27.232 "params": { 00:35:27.232 "name": "Nvme1", 00:35:27.232 "trtype": "tcp", 00:35:27.232 "traddr": "10.0.0.2", 00:35:27.232 "adrfam": "ipv4", 00:35:27.232 "trsvcid": "4420", 00:35:27.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:27.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:27.232 "hdgst": false, 00:35:27.232 "ddgst": false 00:35:27.232 }, 00:35:27.232 "method": "bdev_nvme_attach_controller" 00:35:27.232 }' 00:35:27.232 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:27.232 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:27.232 "params": { 00:35:27.232 "name": "Nvme1", 00:35:27.232 "trtype": "tcp", 00:35:27.232 "traddr": "10.0.0.2", 00:35:27.232 "adrfam": "ipv4", 00:35:27.232 "trsvcid": "4420", 00:35:27.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:27.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:27.232 "hdgst": false, 00:35:27.232 "ddgst": false 00:35:27.232 }, 00:35:27.232 "method": "bdev_nvme_attach_controller" 
00:35:27.232 }' 00:35:27.232 [2024-11-26 07:45:11.276283] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:35:27.232 [2024-11-26 07:45:11.276337] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:35:27.232 [2024-11-26 07:45:11.278687] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:35:27.232 [2024-11-26 07:45:11.278734] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:35:27.232 [2024-11-26 07:45:11.281385] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:35:27.232 [2024-11-26 07:45:11.281432] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:35:27.232 [2024-11-26 07:45:11.282609] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:35:27.232 [2024-11-26 07:45:11.282655] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:35:27.491 [2024-11-26 07:45:11.444456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.491 [2024-11-26 07:45:11.473642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:27.491 [2024-11-26 07:45:11.499939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.491 [2024-11-26 07:45:11.529181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:27.491 [2024-11-26 07:45:11.547036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.491 [2024-11-26 07:45:11.575547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:27.491 [2024-11-26 07:45:11.594754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.750 [2024-11-26 07:45:11.623097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:27.750 Running I/O for 1 seconds... 00:35:27.750 Running I/O for 1 seconds... 00:35:27.750 Running I/O for 1 seconds... 00:35:27.750 Running I/O for 1 seconds... 
00:35:28.688 19563.00 IOPS, 76.42 MiB/s 00:35:28.688 Latency(us) 00:35:28.688 [2024-11-26T06:45:12.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.688 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:35:28.689 Nvme1n1 : 1.01 19610.26 76.60 0.00 0.00 6509.01 2088.96 8956.59 00:35:28.689 [2024-11-26T06:45:12.826Z] =================================================================================================================== 00:35:28.689 [2024-11-26T06:45:12.826Z] Total : 19610.26 76.60 0.00 0.00 6509.01 2088.96 8956.59 00:35:28.689 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2373307 00:35:28.950 181424.00 IOPS, 708.69 MiB/s 00:35:28.950 Latency(us) 00:35:28.950 [2024-11-26T06:45:13.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.950 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:35:28.950 Nvme1n1 : 1.00 181069.50 707.30 0.00 0.00 702.48 303.79 1966.08 00:35:28.950 [2024-11-26T06:45:13.087Z] =================================================================================================================== 00:35:28.950 [2024-11-26T06:45:13.087Z] Total : 181069.50 707.30 0.00 0.00 702.48 303.79 1966.08 00:35:28.950 11934.00 IOPS, 46.62 MiB/s 00:35:28.950 Latency(us) 00:35:28.950 [2024-11-26T06:45:13.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.950 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:35:28.950 Nvme1n1 : 1.01 11987.19 46.82 0.00 0.00 10642.13 4833.28 13981.01 00:35:28.950 [2024-11-26T06:45:13.087Z] =================================================================================================================== 00:35:28.950 [2024-11-26T06:45:13.087Z] Total : 11987.19 46.82 0.00 0.00 10642.13 4833.28 13981.01 00:35:28.950 13452.00 IOPS, 52.55 MiB/s 00:35:28.950 Latency(us) 00:35:28.950 
[2024-11-26T06:45:13.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.950 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:35:28.950 Nvme1n1 : 1.01 13541.36 52.90 0.00 0.00 9427.85 2757.97 15947.09 00:35:28.950 [2024-11-26T06:45:13.087Z] =================================================================================================================== 00:35:28.950 [2024-11-26T06:45:13.087Z] Total : 13541.36 52.90 0.00 0.00 9427.85 2757.97 15947.09 00:35:28.950 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2373309 00:35:28.950 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2373312 00:35:28.950 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:28.950 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.950 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:28.950 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.950 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:35:28.950 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:35:28.950 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:28.950 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:35:28.950 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:28.950 07:45:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:35:28.950 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:28.950 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:28.950 rmmod nvme_tcp 00:35:28.950 rmmod nvme_fabrics 00:35:28.950 rmmod nvme_keyring 00:35:28.950 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:28.950 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:35:28.950 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:35:28.950 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2372868 ']' 00:35:28.950 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2372868 00:35:28.950 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2372868 ']' 00:35:28.950 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2372868 00:35:28.950 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:35:28.950 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:28.950 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2372868 00:35:29.211 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:29.211 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:29.211 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2372868' 00:35:29.211 killing process with pid 2372868 00:35:29.211 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2372868 00:35:29.211 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2372868 00:35:29.211 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:29.211 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:29.211 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:29.211 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:35:29.211 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:35:29.211 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:35:29.211 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:29.211 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:29.211 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:29.212 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:29.212 07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:29.212 
07:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:31.759 00:35:31.759 real 0m13.714s 00:35:31.759 user 0m15.297s 00:35:31.759 sys 0m8.067s 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:31.759 ************************************ 00:35:31.759 END TEST nvmf_bdev_io_wait 00:35:31.759 ************************************ 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:31.759 ************************************ 00:35:31.759 START TEST nvmf_queue_depth 00:35:31.759 ************************************ 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:31.759 * Looking for test storage... 
00:35:31.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:31.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.759 --rc genhtml_branch_coverage=1 00:35:31.759 --rc genhtml_function_coverage=1 00:35:31.759 --rc genhtml_legend=1 00:35:31.759 --rc geninfo_all_blocks=1 00:35:31.759 --rc geninfo_unexecuted_blocks=1 00:35:31.759 00:35:31.759 ' 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:31.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.759 --rc genhtml_branch_coverage=1 00:35:31.759 --rc genhtml_function_coverage=1 00:35:31.759 --rc genhtml_legend=1 00:35:31.759 --rc geninfo_all_blocks=1 00:35:31.759 --rc geninfo_unexecuted_blocks=1 00:35:31.759 00:35:31.759 ' 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:31.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.759 --rc genhtml_branch_coverage=1 00:35:31.759 --rc genhtml_function_coverage=1 00:35:31.759 --rc genhtml_legend=1 00:35:31.759 --rc geninfo_all_blocks=1 00:35:31.759 --rc geninfo_unexecuted_blocks=1 00:35:31.759 00:35:31.759 ' 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:31.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.759 --rc genhtml_branch_coverage=1 00:35:31.759 --rc genhtml_function_coverage=1 00:35:31.759 --rc genhtml_legend=1 00:35:31.759 --rc 
geninfo_all_blocks=1 00:35:31.759 --rc geninfo_unexecuted_blocks=1 00:35:31.759 00:35:31.759 ' 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:31.759 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.760 07:45:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:31.760 07:45:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:31.760 07:45:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:35:31.760 07:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:35:39.996 
07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:39.996 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:39.996 07:45:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:39.996 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:39.996 Found net devices under 0000:31:00.0: cvl_0_0 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:39.996 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:39.997 Found net devices under 0000:31:00.1: cvl_0_1 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:39.997 07:45:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:39.997 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:39.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:39.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:35:39.997 00:35:39.997 --- 10.0.0.2 ping statistics --- 00:35:39.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:39.997 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:39.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:39.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:35:39.997 00:35:39.997 --- 10.0.0.1 ping statistics --- 00:35:39.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:39.997 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:39.997 07:45:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2378349 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2378349 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2378349 ']' 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:39.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:39.997 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:40.258 [2024-11-26 07:45:24.171981] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:40.258 [2024-11-26 07:45:24.173130] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:35:40.259 [2024-11-26 07:45:24.173184] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:40.259 [2024-11-26 07:45:24.284347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.259 [2024-11-26 07:45:24.334411] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:40.259 [2024-11-26 07:45:24.334463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:40.259 [2024-11-26 07:45:24.334472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:40.259 [2024-11-26 07:45:24.334480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:40.259 [2024-11-26 07:45:24.334486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:40.259 [2024-11-26 07:45:24.335271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.519 [2024-11-26 07:45:24.410318] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:40.519 [2024-11-26 07:45:24.410603] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:41.092 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:41.092 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:35:41.092 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:41.092 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:41.092 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:41.092 [2024-11-26 07:45:25.032093] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:41.092 Malloc0 00:35:41.092 07:45:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:41.092 [2024-11-26 07:45:25.100253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.092 
07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2378448 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2378448 /var/tmp/bdevperf.sock 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2378448 ']' 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:41.092 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:41.093 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:41.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:41.093 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:41.093 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:41.093 [2024-11-26 07:45:25.158037] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:35:41.093 [2024-11-26 07:45:25.158090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2378448 ] 00:35:41.354 [2024-11-26 07:45:25.238480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.354 [2024-11-26 07:45:25.278418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:41.926 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:41.926 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:35:41.926 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:41.926 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.926 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:42.187 NVMe0n1 00:35:42.187 07:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.187 07:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:42.187 Running I/O for 10 seconds... 
00:35:44.516 8390.00 IOPS, 32.77 MiB/s [2024-11-26T06:45:29.595Z] 8714.50 IOPS, 34.04 MiB/s [2024-11-26T06:45:30.536Z] 9571.67 IOPS, 37.39 MiB/s [2024-11-26T06:45:31.477Z] 10243.50 IOPS, 40.01 MiB/s [2024-11-26T06:45:32.417Z] 10651.40 IOPS, 41.61 MiB/s [2024-11-26T06:45:33.360Z] 10925.67 IOPS, 42.68 MiB/s [2024-11-26T06:45:34.300Z] 11071.86 IOPS, 43.25 MiB/s [2024-11-26T06:45:35.681Z] 11206.38 IOPS, 43.77 MiB/s [2024-11-26T06:45:36.623Z] 11321.56 IOPS, 44.22 MiB/s [2024-11-26T06:45:36.623Z] 11446.40 IOPS, 44.71 MiB/s 00:35:52.486 Latency(us) 00:35:52.486 [2024-11-26T06:45:36.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:52.486 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:35:52.486 Verification LBA range: start 0x0 length 0x4000 00:35:52.486 NVMe0n1 : 10.07 11466.54 44.79 0.00 0.00 88945.79 24248.32 76021.76 00:35:52.486 [2024-11-26T06:45:36.623Z] =================================================================================================================== 00:35:52.486 [2024-11-26T06:45:36.623Z] Total : 11466.54 44.79 0.00 0.00 88945.79 24248.32 76021.76 00:35:52.486 { 00:35:52.486 "results": [ 00:35:52.486 { 00:35:52.486 "job": "NVMe0n1", 00:35:52.486 "core_mask": "0x1", 00:35:52.486 "workload": "verify", 00:35:52.486 "status": "finished", 00:35:52.486 "verify_range": { 00:35:52.486 "start": 0, 00:35:52.486 "length": 16384 00:35:52.486 }, 00:35:52.486 "queue_depth": 1024, 00:35:52.486 "io_size": 4096, 00:35:52.486 "runtime": 10.065808, 00:35:52.486 "iops": 11466.540987072274, 00:35:52.486 "mibps": 44.79117573075107, 00:35:52.486 "io_failed": 0, 00:35:52.486 "io_timeout": 0, 00:35:52.486 "avg_latency_us": 88945.78672858546, 00:35:52.486 "min_latency_us": 24248.32, 00:35:52.486 "max_latency_us": 76021.76 00:35:52.486 } 00:35:52.486 ], 00:35:52.486 "core_count": 1 00:35:52.486 } 00:35:52.486 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 2378448 00:35:52.486 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2378448 ']' 00:35:52.486 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2378448 00:35:52.486 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:35:52.486 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:52.486 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2378448 00:35:52.486 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:52.486 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:52.486 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2378448' 00:35:52.486 killing process with pid 2378448 00:35:52.486 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2378448 00:35:52.486 Received shutdown signal, test time was about 10.000000 seconds 00:35:52.486 00:35:52.486 Latency(us) 00:35:52.486 [2024-11-26T06:45:36.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:52.486 [2024-11-26T06:45:36.623Z] =================================================================================================================== 00:35:52.486 [2024-11-26T06:45:36.623Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:52.486 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2378448 00:35:52.486 07:45:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:35:52.486 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:35:52.486 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:52.486 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:35:52.486 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:52.486 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:35:52.487 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:52.487 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:52.487 rmmod nvme_tcp 00:35:52.487 rmmod nvme_fabrics 00:35:52.487 rmmod nvme_keyring 00:35:52.487 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:52.487 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:35:52.487 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:35:52.487 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2378349 ']' 00:35:52.487 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2378349 00:35:52.487 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2378349 ']' 00:35:52.487 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2378349 00:35:52.748 07:45:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:35:52.748 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:52.748 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2378349 00:35:52.748 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:52.748 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:52.748 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2378349' 00:35:52.748 killing process with pid 2378349 00:35:52.748 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2378349 00:35:52.748 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2378349 00:35:52.748 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:52.748 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:52.748 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:52.748 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:35:52.748 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:35:52.748 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:52.748 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:35:52.748 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:52.748 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:52.748 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:52.748 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:52.748 07:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.295 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:55.295 00:35:55.295 real 0m23.487s 00:35:55.295 user 0m24.838s 00:35:55.295 sys 0m8.306s 00:35:55.295 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:55.295 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:55.295 ************************************ 00:35:55.295 END TEST nvmf_queue_depth 00:35:55.295 ************************************ 00:35:55.295 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:55.295 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:55.295 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:55.295 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:55.295 ************************************ 00:35:55.295 START 
TEST nvmf_target_multipath 00:35:55.295 ************************************ 00:35:55.295 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:55.295 * Looking for test storage... 00:35:55.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:35:55.295 07:45:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
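The `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.`, `-`, or `:` (the `IFS=.-:` / `read -ra` steps), then walks the components and compares them numerically. A self-contained sketch of that logic, with an illustrative function name (SPDK's scripts/common.sh implements the full variant with `>`, `>=`, etc.):

```shell
# Component-wise version comparison, as traced in the log above.
# Returns 0 (true) when $1 is strictly lower than $2.
version_lt() {
    local IFS=.-:                 # split on dot, dash, or colon, as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a < b )) && return 0   # first version is lower
        (( a > b )) && return 1   # first version is higher
    done
    return 1                      # equal is not "lower than"
}
```

This is why the log's `lt 1.15 2` succeeds: the first components compare as 1 < 2, so the lcov version check passes before any later component is examined.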
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:55.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.295 --rc genhtml_branch_coverage=1 00:35:55.295 --rc genhtml_function_coverage=1 00:35:55.295 --rc genhtml_legend=1 00:35:55.295 --rc geninfo_all_blocks=1 00:35:55.295 --rc geninfo_unexecuted_blocks=1 00:35:55.295 00:35:55.295 ' 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:55.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.295 --rc genhtml_branch_coverage=1 00:35:55.295 --rc genhtml_function_coverage=1 00:35:55.295 --rc genhtml_legend=1 00:35:55.295 --rc geninfo_all_blocks=1 00:35:55.295 --rc geninfo_unexecuted_blocks=1 00:35:55.295 00:35:55.295 ' 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:55.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.295 --rc genhtml_branch_coverage=1 00:35:55.295 --rc genhtml_function_coverage=1 00:35:55.295 --rc genhtml_legend=1 00:35:55.295 --rc geninfo_all_blocks=1 00:35:55.295 --rc geninfo_unexecuted_blocks=1 00:35:55.295 00:35:55.295 ' 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:55.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.295 --rc genhtml_branch_coverage=1 00:35:55.295 --rc genhtml_function_coverage=1 00:35:55.295 --rc genhtml_legend=1 00:35:55.295 --rc geninfo_all_blocks=1 00:35:55.295 --rc geninfo_unexecuted_blocks=1 00:35:55.295 00:35:55.295 ' 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:55.295 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:55.296 07:45:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.296 07:45:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:35:55.296 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:36:03.610 07:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:03.610 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:03.610 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:03.610 Found net devices under 0000:31:00.0: cvl_0_0 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:03.610 07:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:03.610 Found net devices under 0000:31:00.1: cvl_0_1 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:03.610 07:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:03.610 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:03.611 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:03.611 07:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:03.611 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:03.611 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:03.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:03.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms 00:36:03.611 00:36:03.611 --- 10.0.0.2 ping statistics --- 00:36:03.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.611 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms 00:36:03.611 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:03.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:03.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:36:03.611 00:36:03.611 --- 10.0.0.1 ping statistics --- 00:36:03.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.611 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:36:03.611 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:03.611 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:36:03.611 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:03.611 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:03.611 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:03.611 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:03.611 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:03.611 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:03.611 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:36:03.871 only one NIC for nvmf test 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:36:03.871 07:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:03.871 rmmod nvme_tcp 00:36:03.871 rmmod nvme_fabrics 00:36:03.871 rmmod nvme_keyring 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:36:03.871 07:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:03.871 07:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:05.784 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:05.784 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:36:05.784 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:36:05.784 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:05.784 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:36:05.784 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:05.785 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:36:05.785 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:36:05.785 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:06.044 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:06.044 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:36:06.044 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:36:06.044 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:36:06.044 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:06.044 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:06.045 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:06.045 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:36:06.045 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:36:06.045 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:06.045 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:36:06.045 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:06.045 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:06.045 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:06.045 
07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:06.045 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.045 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:06.045 00:36:06.045 real 0m10.994s 00:36:06.045 user 0m2.335s 00:36:06.045 sys 0m6.590s 00:36:06.045 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:06.045 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:36:06.045 ************************************ 00:36:06.045 END TEST nvmf_target_multipath 00:36:06.045 ************************************ 00:36:06.045 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:36:06.045 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:06.045 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:06.045 07:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:06.045 ************************************ 00:36:06.045 START TEST nvmf_zcopy 00:36:06.045 ************************************ 00:36:06.045 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:36:06.045 * Looking for test storage... 
00:36:06.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:06.045 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:06.045 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:36:06.045 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:36:06.306 07:45:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:06.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.306 --rc genhtml_branch_coverage=1 00:36:06.306 --rc genhtml_function_coverage=1 00:36:06.306 --rc genhtml_legend=1 00:36:06.306 --rc geninfo_all_blocks=1 00:36:06.306 --rc geninfo_unexecuted_blocks=1 00:36:06.306 00:36:06.306 ' 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:06.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.306 --rc genhtml_branch_coverage=1 00:36:06.306 --rc genhtml_function_coverage=1 00:36:06.306 --rc genhtml_legend=1 00:36:06.306 --rc geninfo_all_blocks=1 00:36:06.306 --rc geninfo_unexecuted_blocks=1 00:36:06.306 00:36:06.306 ' 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:06.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.306 --rc genhtml_branch_coverage=1 00:36:06.306 --rc genhtml_function_coverage=1 00:36:06.306 --rc genhtml_legend=1 00:36:06.306 --rc geninfo_all_blocks=1 00:36:06.306 --rc geninfo_unexecuted_blocks=1 00:36:06.306 00:36:06.306 ' 00:36:06.306 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:06.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.306 --rc genhtml_branch_coverage=1 00:36:06.306 --rc genhtml_function_coverage=1 00:36:06.306 --rc genhtml_legend=1 00:36:06.306 --rc geninfo_all_blocks=1 00:36:06.306 --rc geninfo_unexecuted_blocks=1 00:36:06.307 00:36:06.307 ' 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:06.307 07:45:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:06.307 07:45:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:36:06.307 07:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:14.447 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:14.447 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:14.448 
07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:14.448 07:45:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:14.448 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:14.448 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:14.448 Found net devices under 0000:31:00.0: cvl_0_0 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:14.448 Found net devices under 0000:31:00.1: cvl_0_1 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:14.448 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:14.709 07:45:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:14.709 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:14.709 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:14.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:14.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:36:14.709 00:36:14.709 --- 10.0.0.2 ping statistics --- 00:36:14.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.710 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:14.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:14.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:36:14.710 00:36:14.710 --- 10.0.0.1 ping statistics --- 00:36:14.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.710 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=2390071 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2390071 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2390071 ']' 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:14.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:14.710 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:14.710 [2024-11-26 07:45:58.705734] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:14.710 [2024-11-26 07:45:58.706770] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:36:14.710 [2024-11-26 07:45:58.706809] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:14.710 [2024-11-26 07:45:58.812498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:14.970 [2024-11-26 07:45:58.859276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:14.970 [2024-11-26 07:45:58.859329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:14.970 [2024-11-26 07:45:58.859339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:14.971 [2024-11-26 07:45:58.859346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:14.971 [2024-11-26 07:45:58.859353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:14.971 [2024-11-26 07:45:58.860118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:14.971 [2024-11-26 07:45:58.936033] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:14.971 [2024-11-26 07:45:58.936339] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.541 [2024-11-26 07:45:59.540991] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.541 
07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.541 [2024-11-26 07:45:59.569225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.541 malloc0 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:15.541 { 00:36:15.541 "params": { 00:36:15.541 "name": "Nvme$subsystem", 00:36:15.541 "trtype": "$TEST_TRANSPORT", 00:36:15.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:15.541 "adrfam": "ipv4", 00:36:15.541 "trsvcid": "$NVMF_PORT", 00:36:15.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:15.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:15.541 "hdgst": ${hdgst:-false}, 00:36:15.541 "ddgst": ${ddgst:-false} 00:36:15.541 }, 00:36:15.541 "method": "bdev_nvme_attach_controller" 00:36:15.541 } 00:36:15.541 EOF 00:36:15.541 )") 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:36:15.541 07:45:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:36:15.541 07:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:15.541 "params": { 00:36:15.541 "name": "Nvme1", 00:36:15.541 "trtype": "tcp", 00:36:15.542 "traddr": "10.0.0.2", 00:36:15.542 "adrfam": "ipv4", 00:36:15.542 "trsvcid": "4420", 00:36:15.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:15.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:15.542 "hdgst": false, 00:36:15.542 "ddgst": false 00:36:15.542 }, 00:36:15.542 "method": "bdev_nvme_attach_controller" 00:36:15.542 }' 00:36:15.542 [2024-11-26 07:45:59.651987] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:36:15.542 [2024-11-26 07:45:59.652041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2390111 ] 00:36:15.802 [2024-11-26 07:45:59.729363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:15.802 [2024-11-26 07:45:59.765568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:16.064 Running I/O for 10 seconds... 
00:36:17.947 6577.00 IOPS, 51.38 MiB/s [2024-11-26T06:46:03.027Z] 6629.00 IOPS, 51.79 MiB/s [2024-11-26T06:46:03.968Z] 6643.33 IOPS, 51.90 MiB/s [2024-11-26T06:46:05.351Z] 6656.75 IOPS, 52.01 MiB/s [2024-11-26T06:46:06.293Z] 6667.80 IOPS, 52.09 MiB/s [2024-11-26T06:46:07.233Z] 6670.33 IOPS, 52.11 MiB/s [2024-11-26T06:46:08.175Z] 7054.00 IOPS, 55.11 MiB/s [2024-11-26T06:46:09.117Z] 7379.50 IOPS, 57.65 MiB/s [2024-11-26T06:46:10.057Z] 7632.22 IOPS, 59.63 MiB/s [2024-11-26T06:46:10.057Z] 7835.50 IOPS, 61.21 MiB/s 00:36:25.920 Latency(us) 00:36:25.920 [2024-11-26T06:46:10.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:25.920 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:36:25.920 Verification LBA range: start 0x0 length 0x1000 00:36:25.920 Nvme1n1 : 10.01 7838.01 61.23 0.00 0.00 16278.06 2389.33 27306.67 00:36:25.920 [2024-11-26T06:46:10.057Z] =================================================================================================================== 00:36:25.920 [2024-11-26T06:46:10.057Z] Total : 7838.01 61.23 0.00 0.00 16278.06 2389.33 27306.67 00:36:26.181 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2392106 00:36:26.181 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:36:26.181 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:26.181 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:36:26.181 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:36:26.181 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:36:26.181 07:46:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:36:26.181 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:26.181 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:26.181 { 00:36:26.181 "params": { 00:36:26.181 "name": "Nvme$subsystem", 00:36:26.181 "trtype": "$TEST_TRANSPORT", 00:36:26.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:26.181 "adrfam": "ipv4", 00:36:26.181 "trsvcid": "$NVMF_PORT", 00:36:26.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:26.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:26.181 "hdgst": ${hdgst:-false}, 00:36:26.181 "ddgst": ${ddgst:-false} 00:36:26.181 }, 00:36:26.181 "method": "bdev_nvme_attach_controller" 00:36:26.181 } 00:36:26.181 EOF 00:36:26.181 )") 00:36:26.181 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:36:26.181 [2024-11-26 07:46:10.108544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.181 [2024-11-26 07:46:10.108575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.181 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:36:26.181 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:36:26.181 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:26.181 "params": { 00:36:26.181 "name": "Nvme1", 00:36:26.181 "trtype": "tcp", 00:36:26.181 "traddr": "10.0.0.2", 00:36:26.181 "adrfam": "ipv4", 00:36:26.181 "trsvcid": "4420", 00:36:26.181 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:26.181 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:26.181 "hdgst": false, 00:36:26.181 "ddgst": false 00:36:26.181 }, 00:36:26.181 "method": "bdev_nvme_attach_controller" 00:36:26.181 }' 00:36:26.181 [2024-11-26 07:46:10.120493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.181 [2024-11-26 07:46:10.120502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.181 [2024-11-26 07:46:10.132491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.181 [2024-11-26 07:46:10.132499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.181 [2024-11-26 07:46:10.144490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.181 [2024-11-26 07:46:10.144497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.181 [2024-11-26 07:46:10.150145] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:36:26.181 [2024-11-26 07:46:10.150199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392106 ] 00:36:26.181 [2024-11-26 07:46:10.156490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.181 [2024-11-26 07:46:10.156498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.181 [2024-11-26 07:46:10.168491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.181 [2024-11-26 07:46:10.168499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.181 [2024-11-26 07:46:10.180490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.181 [2024-11-26 07:46:10.180497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.181 [2024-11-26 07:46:10.192491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.181 [2024-11-26 07:46:10.192498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.181 [2024-11-26 07:46:10.204490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.181 [2024-11-26 07:46:10.204497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.181 [2024-11-26 07:46:10.216490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.181 [2024-11-26 07:46:10.216501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.181 [2024-11-26 07:46:10.227824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.181 [2024-11-26 07:46:10.228491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:36:26.181 [2024-11-26 07:46:10.228498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.181 [2024-11-26 07:46:10.240490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.181 [2024-11-26 07:46:10.240501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.181 [2024-11-26 07:46:10.252503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.181 [2024-11-26 07:46:10.252511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.181 [2024-11-26 07:46:10.263449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:26.181 [2024-11-26 07:46:10.264491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.181 [2024-11-26 07:46:10.264498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.181 [2024-11-26 07:46:10.276495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.181 [2024-11-26 07:46:10.276505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.181 [2024-11-26 07:46:10.288497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.181 [2024-11-26 07:46:10.288512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.181 [2024-11-26 07:46:10.300493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.181 [2024-11-26 07:46:10.300502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 [2024-11-26 07:46:10.312492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.312500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 [2024-11-26 07:46:10.324491] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.324498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 [2024-11-26 07:46:10.336498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.336512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 [2024-11-26 07:46:10.348489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.348499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 [2024-11-26 07:46:10.360491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.360500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 [2024-11-26 07:46:10.372492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.372505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 [2024-11-26 07:46:10.384490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.384501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 [2024-11-26 07:46:10.396494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.396508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 Running I/O for 5 seconds... 
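The error pair repeated throughout this run — `Requested NSID 1 already in use` followed by `Unable to add namespace` — comes from the RPC loop repeatedly asking the subsystem to attach a namespace under an NSID that is already allocated. A minimal sketch of that uniqueness check, as an illustrative model only (this is not SPDK source; the class and method names are hypothetical):

```python
# Illustrative model of the NSID-uniqueness check behind
# "Requested NSID 1 already in use": a subsystem keeps at most one
# namespace per NSID, and adding a duplicate NSID is rejected, which
# is what the retried RPC in the log above keeps hitting.

class Subsystem:
    def __init__(self):
        self.namespaces = {}  # NSID -> bdev name

    def add_ns(self, bdev_name, nsid):
        """Return the NSID on success, or None if the NSID is taken."""
        if nsid in self.namespaces:
            print(f"Requested NSID {nsid} already in use")
            return None
        self.namespaces[nsid] = bdev_name
        return nsid

subsys = Subsystem()
assert subsys.add_ns("Malloc0", 1) == 1      # first add succeeds
assert subsys.add_ns("Malloc1", 1) is None   # duplicate NSID is rejected
```

In the actual test this rejection is expected behavior: the add is retried against a paused subsystem whose NSID 1 is still attached, so every attempt fails the same way at roughly 12 ms intervals.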
00:36:26.442 [2024-11-26 07:46:10.413094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.413110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 [2024-11-26 07:46:10.428353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.428370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 [2024-11-26 07:46:10.442369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.442394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 [2024-11-26 07:46:10.456327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.456344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 [2024-11-26 07:46:10.470266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.470282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 [2024-11-26 07:46:10.484314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.484330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 [2024-11-26 07:46:10.497487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.497503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 [2024-11-26 07:46:10.512091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.512107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 [2024-11-26 07:46:10.525951] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.525966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 [2024-11-26 07:46:10.540529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.540544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 [2024-11-26 07:46:10.553721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.553737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.442 [2024-11-26 07:46:10.568355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.442 [2024-11-26 07:46:10.568370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.702 [2024-11-26 07:46:10.581811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.702 [2024-11-26 07:46:10.581826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.702 [2024-11-26 07:46:10.595971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.702 [2024-11-26 07:46:10.595986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.702 [2024-11-26 07:46:10.609761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.702 [2024-11-26 07:46:10.609777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.702 [2024-11-26 07:46:10.624112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.702 [2024-11-26 07:46:10.624129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.702 [2024-11-26 07:46:10.637740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:26.702 [2024-11-26 07:46:10.637755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.702 [2024-11-26 07:46:10.652739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.702 [2024-11-26 07:46:10.652755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.703 [2024-11-26 07:46:10.666096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.703 [2024-11-26 07:46:10.666111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.703 [2024-11-26 07:46:10.679965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.703 [2024-11-26 07:46:10.679980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.703 [2024-11-26 07:46:10.693479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.703 [2024-11-26 07:46:10.693494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.703 [2024-11-26 07:46:10.708102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.703 [2024-11-26 07:46:10.708127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.703 [2024-11-26 07:46:10.722034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.703 [2024-11-26 07:46:10.722049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.703 [2024-11-26 07:46:10.736453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.703 [2024-11-26 07:46:10.736469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.703 [2024-11-26 07:46:10.750071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.703 
[2024-11-26 07:46:10.750087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.703 [2024-11-26 07:46:10.764494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.703 [2024-11-26 07:46:10.764510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.703 [2024-11-26 07:46:10.777899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.703 [2024-11-26 07:46:10.777915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.703 [2024-11-26 07:46:10.791884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.703 [2024-11-26 07:46:10.791900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.703 [2024-11-26 07:46:10.805670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.703 [2024-11-26 07:46:10.805685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.703 [2024-11-26 07:46:10.819748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.703 [2024-11-26 07:46:10.819763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.964 [2024-11-26 07:46:10.833568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.964 [2024-11-26 07:46:10.833584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.964 [2024-11-26 07:46:10.847361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.964 [2024-11-26 07:46:10.847376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.964 [2024-11-26 07:46:10.861295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.964 [2024-11-26 07:46:10.861310] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.964 [2024-11-26 07:46:10.875952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.964 [2024-11-26 07:46:10.875968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.964 [2024-11-26 07:46:10.889833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.964 [2024-11-26 07:46:10.889849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.964 [2024-11-26 07:46:10.904331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.964 [2024-11-26 07:46:10.904347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.964 [2024-11-26 07:46:10.916926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.964 [2024-11-26 07:46:10.916941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.964 [2024-11-26 07:46:10.932150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.964 [2024-11-26 07:46:10.932166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.964 [2024-11-26 07:46:10.945857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.964 [2024-11-26 07:46:10.945877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.964 [2024-11-26 07:46:10.960256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.964 [2024-11-26 07:46:10.960271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.964 [2024-11-26 07:46:10.974167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.964 [2024-11-26 07:46:10.974192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:36:26.964 [2024-11-26 07:46:10.988928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.964 [2024-11-26 07:46:10.988945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.964 [2024-11-26 07:46:11.004126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.964 [2024-11-26 07:46:11.004143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.965 [2024-11-26 07:46:11.017720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.965 [2024-11-26 07:46:11.017736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.965 [2024-11-26 07:46:11.031872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.965 [2024-11-26 07:46:11.031888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.965 [2024-11-26 07:46:11.045763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.965 [2024-11-26 07:46:11.045778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.965 [2024-11-26 07:46:11.060202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.965 [2024-11-26 07:46:11.060218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.965 [2024-11-26 07:46:11.073974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.965 [2024-11-26 07:46:11.073990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.965 [2024-11-26 07:46:11.088195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.965 [2024-11-26 07:46:11.088211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.225 [2024-11-26 07:46:11.102101] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.226 [2024-11-26 07:46:11.102116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.226 [2024-11-26 07:46:11.115833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.226 [2024-11-26 07:46:11.115850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.226 [2024-11-26 07:46:11.129965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.226 [2024-11-26 07:46:11.129981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.226 [2024-11-26 07:46:11.143549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.226 [2024-11-26 07:46:11.143565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.226 [2024-11-26 07:46:11.157109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.226 [2024-11-26 07:46:11.157123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.226 [2024-11-26 07:46:11.172246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.226 [2024-11-26 07:46:11.172263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.226 [2024-11-26 07:46:11.185578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.226 [2024-11-26 07:46:11.185594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.226 [2024-11-26 07:46:11.200326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.226 [2024-11-26 07:46:11.200342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.226 [2024-11-26 07:46:11.213246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:27.226 [2024-11-26 07:46:11.213261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.226 [2024-11-26 07:46:11.227671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.226 [2024-11-26 07:46:11.227687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.226 [2024-11-26 07:46:11.241598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.226 [2024-11-26 07:46:11.241613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.226 [2024-11-26 07:46:11.256290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.226 [2024-11-26 07:46:11.256305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.226 [2024-11-26 07:46:11.269731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.226 [2024-11-26 07:46:11.269746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.226 [2024-11-26 07:46:11.284072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.226 [2024-11-26 07:46:11.284087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.226 [2024-11-26 07:46:11.298037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.226 [2024-11-26 07:46:11.298053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.226 [2024-11-26 07:46:11.311956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.226 [2024-11-26 07:46:11.311971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.226 [2024-11-26 07:46:11.325671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.226 
[2024-11-26 07:46:11.325687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.226 [2024-11-26 07:46:11.339932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.226 [2024-11-26 07:46:11.339947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.226 [2024-11-26 07:46:11.353579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.226 [2024-11-26 07:46:11.353594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.487 [2024-11-26 07:46:11.368227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.487 [2024-11-26 07:46:11.368243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.487 [2024-11-26 07:46:11.381343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.487 [2024-11-26 07:46:11.381358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.487 [2024-11-26 07:46:11.395619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.487 [2024-11-26 07:46:11.395634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.487 17975.00 IOPS, 140.43 MiB/s [2024-11-26T06:46:11.624Z] [2024-11-26 07:46:11.409425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.487 [2024-11-26 07:46:11.409440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.487 [2024-11-26 07:46:11.424354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.487 [2024-11-26 07:46:11.424369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.487 [2024-11-26 07:46:11.438175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.487 
[2024-11-26 07:46:11.438190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.487 [2024-11-26 07:46:11.452839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.487 [2024-11-26 07:46:11.452854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.487 [2024-11-26 07:46:11.468108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.487 [2024-11-26 07:46:11.468123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.487 [2024-11-26 07:46:11.482107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.487 [2024-11-26 07:46:11.482122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.487 [2024-11-26 07:46:11.496122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.487 [2024-11-26 07:46:11.496137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.487 [2024-11-26 07:46:11.508966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.487 [2024-11-26 07:46:11.508980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.487 [2024-11-26 07:46:11.523773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.487 [2024-11-26 07:46:11.523789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.487 [2024-11-26 07:46:11.537680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.487 [2024-11-26 07:46:11.537695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.487 [2024-11-26 07:46:11.551254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.487 [2024-11-26 07:46:11.551270] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.487 [2024-11-26 07:46:11.565152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.487 [2024-11-26 07:46:11.565167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.487 [2024-11-26 07:46:11.579555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.487 [2024-11-26 07:46:11.579570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.487 [2024-11-26 07:46:11.593177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.488 [2024-11-26 07:46:11.593193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.488 [2024-11-26 07:46:11.608020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.488 [2024-11-26 07:46:11.608036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.748 [2024-11-26 07:46:11.621486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.748 [2024-11-26 07:46:11.621501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.748 [2024-11-26 07:46:11.636181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.748 [2024-11-26 07:46:11.636196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.748 [2024-11-26 07:46:11.649649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.748 [2024-11-26 07:46:11.649664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.748 [2024-11-26 07:46:11.664107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.748 [2024-11-26 07:46:11.664123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:36:27.748 [2024-11-26 07:46:11.677607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.748 [2024-11-26 07:46:11.677624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.748 [2024-11-26 07:46:11.691514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.748 [2024-11-26 07:46:11.691530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.748 [2024-11-26 07:46:11.704870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.748 [2024-11-26 07:46:11.704885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.748 [2024-11-26 07:46:11.720101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.748 [2024-11-26 07:46:11.720116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.748 [2024-11-26 07:46:11.733526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.748 [2024-11-26 07:46:11.733541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.748 [2024-11-26 07:46:11.747968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.748 [2024-11-26 07:46:11.747984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.748 [2024-11-26 07:46:11.761577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.748 [2024-11-26 07:46:11.761600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.748 [2024-11-26 07:46:11.775997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.748 [2024-11-26 07:46:11.776012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.748 [2024-11-26 07:46:11.789678] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.748 [2024-11-26 07:46:11.789692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.748 [2024-11-26 07:46:11.804167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.748 [2024-11-26 07:46:11.804182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.748 [2024-11-26 07:46:11.817829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.748 [2024-11-26 07:46:11.817844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.749 [2024-11-26 07:46:11.831042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.749 [2024-11-26 07:46:11.831057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.749 [2024-11-26 07:46:11.844514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.749 [2024-11-26 07:46:11.844529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.749 [2024-11-26 07:46:11.857207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.749 [2024-11-26 07:46:11.857222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.749 [2024-11-26 07:46:11.871916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.749 [2024-11-26 07:46:11.871931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.009 [2024-11-26 07:46:11.885706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.009 [2024-11-26 07:46:11.885721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.009 [2024-11-26 07:46:11.900283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:28.009 [2024-11-26 07:46:11.900299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.009 [2024-11-26 07:46:11.914049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.009 [2024-11-26 07:46:11.914065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.009 [2024-11-26 07:46:11.928807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.009 [2024-11-26 07:46:11.928822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.009 [2024-11-26 07:46:11.944248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.009 [2024-11-26 07:46:11.944264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.009 [2024-11-26 07:46:11.957744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.009 [2024-11-26 07:46:11.957759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.009 [2024-11-26 07:46:11.972130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.009 [2024-11-26 07:46:11.972145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.009 [2024-11-26 07:46:11.984984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.009 [2024-11-26 07:46:11.984999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.009 [2024-11-26 07:46:11.999911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.009 [2024-11-26 07:46:11.999926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.009 [2024-11-26 07:46:12.013806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.009 
[2024-11-26 07:46:12.013821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.009 [2024-11-26 07:46:12.027460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.009 [2024-11-26 07:46:12.027484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.009 [2024-11-26 07:46:12.040854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.009 [2024-11-26 07:46:12.040873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.009 [2024-11-26 07:46:12.055947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.009 [2024-11-26 07:46:12.055963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.009 [2024-11-26 07:46:12.069779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.009 [2024-11-26 07:46:12.069794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.009 [2024-11-26 07:46:12.084274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.009 [2024-11-26 07:46:12.084290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.009 [2024-11-26 07:46:12.097685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.009 [2024-11-26 07:46:12.097700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.009 [2024-11-26 07:46:12.112250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.009 [2024-11-26 07:46:12.112266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.010 [2024-11-26 07:46:12.126062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.010 [2024-11-26 07:46:12.126077] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.271 [2024-11-26 07:46:12.140181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.271 [2024-11-26 07:46:12.140196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:28.532 18058.50 IOPS, 141.08 MiB/s [2024-11-26T06:46:12.669Z]
00:36:29.315 18078.33 IOPS, 141.24 MiB/s [2024-11-26T06:46:13.452Z]
00:36:30.358 [2024-11-26 07:46:14.304079] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.358 [2024-11-26 07:46:14.304096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.358 [2024-11-26 07:46:14.317722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.358 [2024-11-26 07:46:14.317737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.358 [2024-11-26 07:46:14.332065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.358 [2024-11-26 07:46:14.332081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.358 [2024-11-26 07:46:14.345892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.358 [2024-11-26 07:46:14.345908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.358 [2024-11-26 07:46:14.359958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.358 [2024-11-26 07:46:14.359974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.358 [2024-11-26 07:46:14.373448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.358 [2024-11-26 07:46:14.373464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.358 [2024-11-26 07:46:14.387534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.358 [2024-11-26 07:46:14.387550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.358 [2024-11-26 07:46:14.401422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.358 [2024-11-26 07:46:14.401438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.358 18084.75 IOPS, 141.29 MiB/s [2024-11-26T06:46:14.495Z] [2024-11-26 07:46:14.416261] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.358 [2024-11-26 07:46:14.416276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.358 [2024-11-26 07:46:14.429483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.358 [2024-11-26 07:46:14.429500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.358 [2024-11-26 07:46:14.443896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.358 [2024-11-26 07:46:14.443911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.358 [2024-11-26 07:46:14.457415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.358 [2024-11-26 07:46:14.457431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.358 [2024-11-26 07:46:14.472243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.358 [2024-11-26 07:46:14.472258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.358 [2024-11-26 07:46:14.485923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.358 [2024-11-26 07:46:14.485938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.619 [2024-11-26 07:46:14.500445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.619 [2024-11-26 07:46:14.500460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.619 [2024-11-26 07:46:14.512838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.619 [2024-11-26 07:46:14.512853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.619 [2024-11-26 07:46:14.528587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:30.619 [2024-11-26 07:46:14.528603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.619 [2024-11-26 07:46:14.542022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.619 [2024-11-26 07:46:14.542038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.619 [2024-11-26 07:46:14.556019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.619 [2024-11-26 07:46:14.556035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.619 [2024-11-26 07:46:14.569403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.619 [2024-11-26 07:46:14.569418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.619 [2024-11-26 07:46:14.584357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.619 [2024-11-26 07:46:14.584373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.619 [2024-11-26 07:46:14.597773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.619 [2024-11-26 07:46:14.597790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.619 [2024-11-26 07:46:14.612220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.619 [2024-11-26 07:46:14.612236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.619 [2024-11-26 07:46:14.625770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.619 [2024-11-26 07:46:14.625786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.619 [2024-11-26 07:46:14.639474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.619 
[2024-11-26 07:46:14.639489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.619 [2024-11-26 07:46:14.652944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.619 [2024-11-26 07:46:14.652960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.619 [2024-11-26 07:46:14.668126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.619 [2024-11-26 07:46:14.668142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.620 [2024-11-26 07:46:14.681661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.620 [2024-11-26 07:46:14.681677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.620 [2024-11-26 07:46:14.696240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.620 [2024-11-26 07:46:14.696255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.620 [2024-11-26 07:46:14.709757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.620 [2024-11-26 07:46:14.709772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.620 [2024-11-26 07:46:14.724479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.620 [2024-11-26 07:46:14.724494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.620 [2024-11-26 07:46:14.737810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.620 [2024-11-26 07:46:14.737826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.880 [2024-11-26 07:46:14.752306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.880 [2024-11-26 07:46:14.752321] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.880 [2024-11-26 07:46:14.765870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.880 [2024-11-26 07:46:14.765885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.880 [2024-11-26 07:46:14.779491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.880 [2024-11-26 07:46:14.779507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.880 [2024-11-26 07:46:14.793232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.880 [2024-11-26 07:46:14.793246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.880 [2024-11-26 07:46:14.807294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.880 [2024-11-26 07:46:14.807309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.880 [2024-11-26 07:46:14.820994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.880 [2024-11-26 07:46:14.821009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.880 [2024-11-26 07:46:14.836219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.880 [2024-11-26 07:46:14.836234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.880 [2024-11-26 07:46:14.849980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.880 [2024-11-26 07:46:14.849996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.880 [2024-11-26 07:46:14.864700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.880 [2024-11-26 07:46:14.864715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:36:30.880 [2024-11-26 07:46:14.878346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.880 [2024-11-26 07:46:14.878362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.880 [2024-11-26 07:46:14.891958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.880 [2024-11-26 07:46:14.891973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.880 [2024-11-26 07:46:14.905746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.880 [2024-11-26 07:46:14.905761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.880 [2024-11-26 07:46:14.920098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.880 [2024-11-26 07:46:14.920123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.880 [2024-11-26 07:46:14.933836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.880 [2024-11-26 07:46:14.933852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.880 [2024-11-26 07:46:14.948033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.880 [2024-11-26 07:46:14.948049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.880 [2024-11-26 07:46:14.961857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.880 [2024-11-26 07:46:14.961876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.880 [2024-11-26 07:46:14.976254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.880 [2024-11-26 07:46:14.976269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.880 [2024-11-26 07:46:14.990054] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.880 [2024-11-26 07:46:14.990069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.880 [2024-11-26 07:46:15.004401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.880 [2024-11-26 07:46:15.004417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.141 [2024-11-26 07:46:15.018109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.141 [2024-11-26 07:46:15.018124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.141 [2024-11-26 07:46:15.032331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.141 [2024-11-26 07:46:15.032346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.141 [2024-11-26 07:46:15.045857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.141 [2024-11-26 07:46:15.045879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.141 [2024-11-26 07:46:15.059394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.141 [2024-11-26 07:46:15.059409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.141 [2024-11-26 07:46:15.073366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.141 [2024-11-26 07:46:15.073381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.141 [2024-11-26 07:46:15.087828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.141 [2024-11-26 07:46:15.087843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.141 [2024-11-26 07:46:15.101599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:31.141 [2024-11-26 07:46:15.101615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.141 [2024-11-26 07:46:15.116272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.141 [2024-11-26 07:46:15.116288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.141 [2024-11-26 07:46:15.129943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.141 [2024-11-26 07:46:15.129958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.141 [2024-11-26 07:46:15.144235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.141 [2024-11-26 07:46:15.144250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.141 [2024-11-26 07:46:15.157946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.141 [2024-11-26 07:46:15.157961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.141 [2024-11-26 07:46:15.172853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.141 [2024-11-26 07:46:15.172873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.141 [2024-11-26 07:46:15.188422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.141 [2024-11-26 07:46:15.188446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.141 [2024-11-26 07:46:15.202045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.141 [2024-11-26 07:46:15.202060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.141 [2024-11-26 07:46:15.216663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.141 
[2024-11-26 07:46:15.216679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.141 [2024-11-26 07:46:15.230454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.141 [2024-11-26 07:46:15.230469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.141 [2024-11-26 07:46:15.243784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.141 [2024-11-26 07:46:15.243800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.141 [2024-11-26 07:46:15.257455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.141 [2024-11-26 07:46:15.257470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.402 [2024-11-26 07:46:15.272557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.402 [2024-11-26 07:46:15.272573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.402 [2024-11-26 07:46:15.285062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.402 [2024-11-26 07:46:15.285077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.402 [2024-11-26 07:46:15.299868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.402 [2024-11-26 07:46:15.299883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.402 [2024-11-26 07:46:15.313671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.402 [2024-11-26 07:46:15.313687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.402 [2024-11-26 07:46:15.328094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.402 [2024-11-26 07:46:15.328109] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.402 [2024-11-26 07:46:15.341834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.402 [2024-11-26 07:46:15.341849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.402 [2024-11-26 07:46:15.356551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.402 [2024-11-26 07:46:15.356567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.402 [2024-11-26 07:46:15.369142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.402 [2024-11-26 07:46:15.369157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.402 [2024-11-26 07:46:15.383505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.402 [2024-11-26 07:46:15.383521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.402 [2024-11-26 07:46:15.397035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.402 [2024-11-26 07:46:15.397050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.402 [2024-11-26 07:46:15.411851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.402 [2024-11-26 07:46:15.411870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.402 18099.80 IOPS, 141.40 MiB/s [2024-11-26T06:46:15.539Z] [2024-11-26 07:46:15.424155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.402 [2024-11-26 07:46:15.424170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.402 00:36:31.402 Latency(us) 00:36:31.402 [2024-11-26T06:46:15.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:31.402 Job: Nvme1n1 
(Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:36:31.402
Nvme1n1 : 5.01 18101.30 141.42 0.00 0.00 7063.28 2539.52 12615.68
===================================================================================================================
Total : 18101.30 141.42 0.00 0.00 7063.28 2539.52 12615.68
[2024-11-26 07:46:15.432493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.402 [2024-11-26 07:46:15.432507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats at ~12 ms intervals from 07:46:15.444 through 07:46:15.540 while the loop drains; the identical repeats are elided ...]
00:36:31.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2392106) - No such process 00:36:31.663 07:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2392106 00:36:31.663 07:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.663 07:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.663 07:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:31.663 07:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.663 07:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:31.663 07:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.663 07:46:15 
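The summary table nearby reports 18101.30 IOPS for the Nvme1n1 abort job at an 8192-byte I/O size alongside 141.42 MiB/s. As a quick sanity check (an annotation, not part of the original log), IOPS times I/O size should reproduce the MiB/s column:

```shell
# Sanity check on the abort-job summary: MiB/s = IOPS * I/O size / 2^20.
# The 18101.30 and 8192 figures are taken from the log's summary table.
awk 'BEGIN { printf "%.2f\n", 18101.30 * 8192 / (1024 * 1024) }'
# prints 141.42, matching the reported MiB/s
```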
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:31.663 delay0 00:36:31.663 07:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.663 07:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:36:31.663 07:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.663 07:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:31.663 07:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.663 07:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:36:31.663 [2024-11-26 07:46:15.687345] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:39.801 Initializing NVMe Controllers 00:36:39.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:39.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:39.801 Initialization complete. Launching workers. 
00:36:39.801 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 5724 00:36:39.801 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5998, failed to submit 46 00:36:39.801 success 5846, unsuccessful 152, failed 0 00:36:39.801 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:36:39.801 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:36:39.801 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:39.801 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:36:39.802 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:39.802 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:36:39.802 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:39.802 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:39.802 rmmod nvme_tcp 00:36:39.802 rmmod nvme_fabrics 00:36:39.802 rmmod nvme_keyring 00:36:39.802 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:39.802 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:36:39.802 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:36:39.802 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2390071 ']' 00:36:39.802 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2390071 00:36:39.802 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 
-- # '[' -z 2390071 ']' 00:36:39.802 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2390071 00:36:39.802 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:36:39.802 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:39.802 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2390071 00:36:39.802 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:39.802 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:39.802 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2390071' 00:36:39.802 killing process with pid 2390071 00:36:39.802 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2390071 00:36:39.802 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2390071 00:36:39.802 07:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:39.802 07:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:39.802 07:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:39.802 07:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:36:39.802 07:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:39.802 07:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:36:39.802 07:46:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:36:39.802 07:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:39.802 07:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:39.802 07:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:39.802 07:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:39.802 07:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:41.185 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:41.185 00:36:41.185 real 0m35.077s 00:36:41.185 user 0m43.897s 00:36:41.185 sys 0m12.679s 00:36:41.185 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:41.185 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:41.185 ************************************ 00:36:41.185 END TEST nvmf_zcopy 00:36:41.185 ************************************ 00:36:41.185 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:36:41.185 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:41.185 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:41.185 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:41.185 
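The zcopy trace above swaps NSID 1 over to a delay bdev through SPDK's JSON-RPC interface (the `rpc_cmd` calls in the xtrace). A standalone sketch of that sequence using `scripts/rpc.py` follows; it defaults to a dry run (echo) because it assumes a live SPDK target, and the NQN, bdev names, and latency values are copied from the log:

```shell
# Sketch of the namespace-swap sequence from the zcopy trace above.
# Dry run by default: set RPC=./scripts/rpc.py against a running SPDK target to execute.
RPC="${RPC:-echo rpc.py}"
NQN="nqn.2016-06.io.spdk:cnode1"

# Detach the original namespace (NSID 1) from the subsystem,
$RPC nvmf_subsystem_remove_ns "$NQN" 1
# wrap malloc0 in a delay bdev that adds 1,000,000 us of latency per I/O,
$RPC bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# and re-attach the delayed bdev as NSID 1.
$RPC nvmf_subsystem_add_ns "$NQN" delay0 -n 1
```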
************************************ 00:36:41.185 START TEST nvmf_nmic 00:36:41.185 ************************************ 00:36:41.185 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:36:41.185 * Looking for test storage... 00:36:41.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:41.185 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:41.185 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:36:41.185 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:36:41.447 07:46:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:36:41.447 07:46:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:41.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.447 --rc genhtml_branch_coverage=1 00:36:41.447 --rc genhtml_function_coverage=1 00:36:41.447 --rc genhtml_legend=1 00:36:41.447 --rc geninfo_all_blocks=1 00:36:41.447 --rc geninfo_unexecuted_blocks=1 00:36:41.447 00:36:41.447 ' 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:41.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.447 --rc genhtml_branch_coverage=1 00:36:41.447 --rc genhtml_function_coverage=1 00:36:41.447 --rc genhtml_legend=1 00:36:41.447 --rc geninfo_all_blocks=1 00:36:41.447 --rc geninfo_unexecuted_blocks=1 00:36:41.447 00:36:41.447 ' 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:41.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.447 --rc genhtml_branch_coverage=1 00:36:41.447 --rc genhtml_function_coverage=1 00:36:41.447 --rc genhtml_legend=1 00:36:41.447 --rc geninfo_all_blocks=1 00:36:41.447 --rc geninfo_unexecuted_blocks=1 00:36:41.447 00:36:41.447 ' 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:41.447 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.447 --rc genhtml_branch_coverage=1 00:36:41.447 --rc genhtml_function_coverage=1 00:36:41.447 --rc genhtml_legend=1 00:36:41.447 --rc geninfo_all_blocks=1 00:36:41.447 --rc geninfo_unexecuted_blocks=1 00:36:41.447 00:36:41.447 ' 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:41.447 07:46:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:41.447 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.448 07:46:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:36:41.448 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:49.587 07:46:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:49.587 07:46:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:49.587 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:49.587 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:49.587 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:49.588 07:46:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:49.588 Found net devices under 0000:31:00.0: cvl_0_0 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:49.588 07:46:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:49.588 Found net devices under 0000:31:00.1: cvl_0_1 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:49.588 07:46:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:49.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:49.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:36:49.588 00:36:49.588 --- 10.0.0.2 ping statistics --- 00:36:49.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:49.588 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:49.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:49.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:36:49.588 00:36:49.588 --- 10.0.0.1 ping statistics --- 00:36:49.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:49.588 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:36:49.588 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:49.589 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:49.589 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:49.589 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2399137 
00:36:49.589 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2399137 00:36:49.589 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:49.589 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2399137 ']' 00:36:49.589 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:49.589 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:49.589 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:49.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:49.589 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:49.589 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:49.848 [2024-11-26 07:46:33.746941] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:49.849 [2024-11-26 07:46:33.748079] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:36:49.849 [2024-11-26 07:46:33.748133] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:49.849 [2024-11-26 07:46:33.838796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:49.849 [2024-11-26 07:46:33.881757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:49.849 [2024-11-26 07:46:33.881793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:49.849 [2024-11-26 07:46:33.881802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:49.849 [2024-11-26 07:46:33.881809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:49.849 [2024-11-26 07:46:33.881815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:49.849 [2024-11-26 07:46:33.883389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:49.849 [2024-11-26 07:46:33.883505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:49.849 [2024-11-26 07:46:33.883663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:49.849 [2024-11-26 07:46:33.883663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:49.849 [2024-11-26 07:46:33.939665] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:49.849 [2024-11-26 07:46:33.939742] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:49.849 [2024-11-26 07:46:33.940634] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:36:49.849 [2024-11-26 07:46:33.941441] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:49.849 [2024-11-26 07:46:33.941526] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:50.419 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:50.419 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:36:50.419 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:50.680 [2024-11-26 07:46:34.596148] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:50.680 Malloc0 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:50.680 [2024-11-26 07:46:34.672295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:50.680 07:46:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:36:50.680 test case1: single bdev can't be used in multiple subsystems 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:50.680 [2024-11-26 07:46:34.708041] 
bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:36:50.680 [2024-11-26 07:46:34.708060] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:36:50.680 [2024-11-26 07:46:34.708068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:50.680 request: 00:36:50.680 { 00:36:50.680 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:36:50.680 "namespace": { 00:36:50.680 "bdev_name": "Malloc0", 00:36:50.680 "no_auto_visible": false 00:36:50.680 }, 00:36:50.680 "method": "nvmf_subsystem_add_ns", 00:36:50.680 "req_id": 1 00:36:50.680 } 00:36:50.680 Got JSON-RPC error response 00:36:50.680 response: 00:36:50.680 { 00:36:50.680 "code": -32602, 00:36:50.680 "message": "Invalid parameters" 00:36:50.680 } 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:36:50.680 Adding namespace failed - expected result. 
00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:36:50.680 test case2: host connect to nvmf target in multiple paths 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:50.680 [2024-11-26 07:46:34.720151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:50.680 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.681 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:51.250 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:36:51.511 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:36:51.511 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:36:51.511 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:51.511 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:51.511 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:36:53.423 07:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:53.423 07:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:53.423 07:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:53.423 07:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:53.423 07:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:53.423 07:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:36:53.423 07:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:53.423 [global] 00:36:53.423 thread=1 00:36:53.423 invalidate=1 00:36:53.423 rw=write 00:36:53.423 time_based=1 00:36:53.423 runtime=1 00:36:53.423 ioengine=libaio 00:36:53.423 direct=1 00:36:53.423 bs=4096 00:36:53.423 iodepth=1 00:36:53.423 norandommap=0 00:36:53.423 numjobs=1 00:36:53.423 00:36:53.423 verify_dump=1 00:36:53.423 verify_backlog=512 00:36:53.423 verify_state_save=0 00:36:53.423 do_verify=1 00:36:53.423 verify=crc32c-intel 00:36:53.708 [job0] 00:36:53.708 filename=/dev/nvme0n1 00:36:53.708 Could not set queue depth (nvme0n1) 00:36:53.972 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:53.972 fio-3.35 00:36:53.972 Starting 1 thread 00:36:54.915 00:36:54.915 job0: (groupid=0, jobs=1): err= 0: pid=2400051: Tue Nov 26 
07:46:39 2024 00:36:54.915 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:54.915 slat (nsec): min=7087, max=59855, avg=26705.24, stdev=3727.79 00:36:54.915 clat (usec): min=830, max=1295, avg=1084.41, stdev=84.54 00:36:54.915 lat (usec): min=857, max=1335, avg=1111.11, stdev=84.83 00:36:54.915 clat percentiles (usec): 00:36:54.915 | 1.00th=[ 881], 5.00th=[ 947], 10.00th=[ 963], 20.00th=[ 1004], 00:36:54.915 | 30.00th=[ 1029], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1123], 00:36:54.915 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1205], 00:36:54.915 | 99.00th=[ 1237], 99.50th=[ 1287], 99.90th=[ 1303], 99.95th=[ 1303], 00:36:54.915 | 99.99th=[ 1303] 00:36:54.915 write: IOPS=572, BW=2290KiB/s (2345kB/s)(2292KiB/1001msec); 0 zone resets 00:36:54.915 slat (usec): min=9, max=28562, avg=83.05, stdev=1191.84 00:36:54.915 clat (usec): min=182, max=889, avg=654.51, stdev=98.62 00:36:54.915 lat (usec): min=216, max=29317, avg=737.56, stdev=1200.25 00:36:54.915 clat percentiles (usec): 00:36:54.915 | 1.00th=[ 363], 5.00th=[ 482], 10.00th=[ 529], 20.00th=[ 586], 00:36:54.915 | 30.00th=[ 619], 40.00th=[ 644], 50.00th=[ 676], 60.00th=[ 693], 00:36:54.915 | 70.00th=[ 709], 80.00th=[ 734], 90.00th=[ 758], 95.00th=[ 783], 00:36:54.915 | 99.00th=[ 824], 99.50th=[ 857], 99.90th=[ 889], 99.95th=[ 889], 00:36:54.915 | 99.99th=[ 889] 00:36:54.915 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:36:54.915 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:54.915 lat (usec) : 250=0.09%, 500=3.87%, 750=41.11%, 1000=16.96% 00:36:54.915 lat (msec) : 2=37.97% 00:36:54.915 cpu : usr=1.60%, sys=3.40%, ctx=1089, majf=0, minf=1 00:36:54.915 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:54.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:54.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:54.915 issued 
rwts: total=512,573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:54.915 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:54.915 00:36:54.915 Run status group 0 (all jobs): 00:36:54.915 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:36:54.915 WRITE: bw=2290KiB/s (2345kB/s), 2290KiB/s-2290KiB/s (2345kB/s-2345kB/s), io=2292KiB (2347kB), run=1001-1001msec 00:36:54.915 00:36:54.915 Disk stats (read/write): 00:36:54.915 nvme0n1: ios=487/512, merge=0/0, ticks=1488/315, in_queue=1803, util=98.70% 00:36:55.175 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:55.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:36:55.175 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:55.175 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:36:55.175 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:55.175 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:55.175 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:55.175 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:55.175 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:36:55.175 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:55.175 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:36:55.175 07:46:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:55.175 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:36:55.175 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:55.175 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:36:55.175 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:55.175 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:55.175 rmmod nvme_tcp 00:36:55.175 rmmod nvme_fabrics 00:36:55.434 rmmod nvme_keyring 00:36:55.434 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:55.434 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:36:55.434 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:36:55.434 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2399137 ']' 00:36:55.434 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2399137 00:36:55.434 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2399137 ']' 00:36:55.434 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2399137 00:36:55.434 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:36:55.434 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:55.435 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2399137 
00:36:55.435 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:55.435 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:55.435 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2399137' 00:36:55.435 killing process with pid 2399137 00:36:55.435 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2399137 00:36:55.435 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2399137 00:36:55.435 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:55.435 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:55.435 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:55.435 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:36:55.435 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:36:55.435 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:55.435 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:36:55.435 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:55.435 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:55.435 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:55.435 07:46:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:55.435 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:57.979 00:36:57.979 real 0m16.421s 00:36:57.979 user 0m35.804s 00:36:57.979 sys 0m7.971s 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:57.979 ************************************ 00:36:57.979 END TEST nvmf_nmic 00:36:57.979 ************************************ 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:57.979 ************************************ 00:36:57.979 START TEST nvmf_fio_target 00:36:57.979 ************************************ 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:57.979 * Looking for test storage... 
00:36:57.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:57.979 
07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:57.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.979 --rc genhtml_branch_coverage=1 00:36:57.979 --rc genhtml_function_coverage=1 00:36:57.979 --rc genhtml_legend=1 00:36:57.979 --rc geninfo_all_blocks=1 00:36:57.979 --rc geninfo_unexecuted_blocks=1 00:36:57.979 00:36:57.979 ' 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:57.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.979 --rc genhtml_branch_coverage=1 00:36:57.979 --rc genhtml_function_coverage=1 00:36:57.979 --rc genhtml_legend=1 00:36:57.979 --rc geninfo_all_blocks=1 00:36:57.979 --rc geninfo_unexecuted_blocks=1 00:36:57.979 00:36:57.979 ' 00:36:57.979 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:57.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.979 --rc genhtml_branch_coverage=1 00:36:57.980 --rc genhtml_function_coverage=1 00:36:57.980 --rc genhtml_legend=1 00:36:57.980 --rc geninfo_all_blocks=1 00:36:57.980 --rc geninfo_unexecuted_blocks=1 00:36:57.980 00:36:57.980 ' 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:57.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.980 --rc genhtml_branch_coverage=1 00:36:57.980 --rc genhtml_function_coverage=1 00:36:57.980 --rc genhtml_legend=1 00:36:57.980 --rc geninfo_all_blocks=1 
00:36:57.980 --rc geninfo_unexecuted_blocks=1 00:36:57.980 00:36:57.980 ' 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:57.980 
07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.980 07:46:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:57.980 
07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:57.980 07:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:36:57.980 07:46:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:37:06.131 07:46:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:06.131 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:06.132 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:06.132 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:06.132 
07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:06.132 Found net 
devices under 0000:31:00.0: cvl_0_0 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:06.132 Found net devices under 0000:31:00.1: cvl_0_1 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:06.132 07:46:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:06.132 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:06.132 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:06.132 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:06.132 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:06.132 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:06.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:06.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:37:06.132 00:37:06.132 --- 10.0.0.2 ping statistics --- 00:37:06.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:06.132 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:37:06.132 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:06.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:06.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:37:06.132 00:37:06.132 --- 10.0.0.1 ping statistics --- 00:37:06.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:06.132 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:37:06.132 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:06.132 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:37:06.132 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:06.132 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:06.132 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:06.132 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:06.132 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:06.132 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:06.132 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:06.132 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:37:06.132 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:06.132 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:06.132 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:06.132 07:46:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2405029 00:37:06.133 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2405029 00:37:06.133 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:37:06.133 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2405029 ']' 00:37:06.133 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:06.133 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:06.133 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:06.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:06.133 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:06.133 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:06.133 [2024-11-26 07:46:50.226686] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:06.133 [2024-11-26 07:46:50.227839] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:37:06.133 [2024-11-26 07:46:50.227905] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:06.394 [2024-11-26 07:46:50.320198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:06.395 [2024-11-26 07:46:50.361083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:06.395 [2024-11-26 07:46:50.361118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:06.395 [2024-11-26 07:46:50.361126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:06.395 [2024-11-26 07:46:50.361134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:06.395 [2024-11-26 07:46:50.361139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:06.395 [2024-11-26 07:46:50.362716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:06.395 [2024-11-26 07:46:50.362831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:06.395 [2024-11-26 07:46:50.362971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:06.395 [2024-11-26 07:46:50.363101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:06.395 [2024-11-26 07:46:50.419636] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:06.395 [2024-11-26 07:46:50.419682] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:06.395 [2024-11-26 07:46:50.420736] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:37:06.395 [2024-11-26 07:46:50.421319] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:06.395 [2024-11-26 07:46:50.421449] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:06.968 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:06.968 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:37:06.968 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:06.968 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:06.968 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:06.968 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:06.968 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:07.229 [2024-11-26 07:46:51.255937] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:07.229 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:07.490 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:37:07.490 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:37:07.752 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:37:07.752 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:08.013 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:37:08.013 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:08.013 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:37:08.013 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:37:08.273 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:08.579 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:37:08.579 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:08.579 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:37:08.579 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:08.840 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:37:08.840 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:37:08.840 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:09.100 07:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:37:09.100 07:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:09.361 07:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:37:09.361 07:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:09.361 07:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:09.621 [2024-11-26 07:46:53.632041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:09.621 07:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:37:09.881 07:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:37:10.142 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:10.403 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:37:10.403 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:37:10.403 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:37:10.403 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:37:10.403 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:37:10.403 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:37:12.946 07:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:37:12.946 07:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:37:12.946 07:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:37:12.946 07:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:37:12.946 07:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:37:12.946 07:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:37:12.946 07:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:37:12.946 [global] 00:37:12.946 thread=1 00:37:12.946 invalidate=1 00:37:12.946 rw=write 00:37:12.946 time_based=1 00:37:12.946 runtime=1 00:37:12.946 ioengine=libaio 00:37:12.946 direct=1 00:37:12.946 bs=4096 00:37:12.946 iodepth=1 00:37:12.946 norandommap=0 00:37:12.946 numjobs=1 00:37:12.946 00:37:12.946 verify_dump=1 00:37:12.946 verify_backlog=512 00:37:12.946 verify_state_save=0 00:37:12.946 do_verify=1 00:37:12.946 verify=crc32c-intel 00:37:12.946 [job0] 00:37:12.946 filename=/dev/nvme0n1 00:37:12.946 [job1] 00:37:12.946 filename=/dev/nvme0n2 00:37:12.946 [job2] 00:37:12.946 filename=/dev/nvme0n3 00:37:12.946 [job3] 00:37:12.946 filename=/dev/nvme0n4 00:37:12.946 Could not set queue depth (nvme0n1) 00:37:12.946 Could not set queue depth (nvme0n2) 00:37:12.946 Could not set queue depth (nvme0n3) 00:37:12.946 Could not set queue depth (nvme0n4) 00:37:12.946 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:12.946 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:12.946 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:12.946 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:12.946 fio-3.35 00:37:12.946 Starting 4 threads 00:37:14.349 00:37:14.349 job0: (groupid=0, jobs=1): err= 0: pid=2406617: Tue Nov 26 07:46:58 2024 00:37:14.349 read: IOPS=18, BW=75.6KiB/s (77.4kB/s)(76.0KiB/1005msec) 00:37:14.349 slat (nsec): min=26694, max=27135, avg=26904.37, stdev=143.52 00:37:14.349 clat (usec): min=40948, max=42023, avg=41744.15, stdev=418.90 00:37:14.349 lat (usec): min=40975, 
max=42050, avg=41771.05, stdev=418.88 00:37:14.349 clat percentiles (usec): 00:37:14.349 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:14.349 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:37:14.349 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:14.349 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:14.349 | 99.99th=[42206] 00:37:14.349 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:37:14.349 slat (nsec): min=3196, max=67331, avg=24405.49, stdev=13461.61 00:37:14.349 clat (usec): min=121, max=939, avg=382.05, stdev=137.59 00:37:14.349 lat (usec): min=125, max=975, avg=406.45, stdev=139.48 00:37:14.349 clat percentiles (usec): 00:37:14.349 | 1.00th=[ 151], 5.00th=[ 233], 10.00th=[ 255], 20.00th=[ 277], 00:37:14.349 | 30.00th=[ 297], 40.00th=[ 338], 50.00th=[ 371], 60.00th=[ 383], 00:37:14.349 | 70.00th=[ 404], 80.00th=[ 433], 90.00th=[ 570], 95.00th=[ 709], 00:37:14.349 | 99.00th=[ 840], 99.50th=[ 914], 99.90th=[ 938], 99.95th=[ 938], 00:37:14.349 | 99.99th=[ 938] 00:37:14.349 bw ( KiB/s): min= 4096, max= 4096, per=34.62%, avg=4096.00, stdev= 0.00, samples=1 00:37:14.349 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:14.349 lat (usec) : 250=7.91%, 500=76.27%, 750=9.79%, 1000=2.45% 00:37:14.349 lat (msec) : 50=3.58% 00:37:14.349 cpu : usr=0.90%, sys=1.00%, ctx=532, majf=0, minf=1 00:37:14.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:14.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.349 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:14.349 job1: (groupid=0, jobs=1): err= 0: pid=2406618: Tue Nov 26 07:46:58 2024 00:37:14.349 read: IOPS=16, BW=67.9KiB/s 
(69.6kB/s)(68.0KiB/1001msec) 00:37:14.349 slat (nsec): min=26186, max=27579, avg=27000.12, stdev=286.63 00:37:14.349 clat (usec): min=1061, max=42095, avg=39336.21, stdev=9868.81 00:37:14.349 lat (usec): min=1088, max=42123, avg=39363.21, stdev=9868.82 00:37:14.349 clat percentiles (usec): 00:37:14.349 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[41157], 20.00th=[41157], 00:37:14.349 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:37:14.349 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:14.349 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:14.349 | 99.99th=[42206] 00:37:14.349 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:37:14.349 slat (usec): min=10, max=1479, avg=38.29, stdev=66.24 00:37:14.349 clat (usec): min=255, max=950, avg=601.48, stdev=141.07 00:37:14.349 lat (usec): min=291, max=2137, avg=639.77, stdev=159.34 00:37:14.349 clat percentiles (usec): 00:37:14.349 | 1.00th=[ 297], 5.00th=[ 371], 10.00th=[ 396], 20.00th=[ 469], 00:37:14.349 | 30.00th=[ 523], 40.00th=[ 570], 50.00th=[ 611], 60.00th=[ 644], 00:37:14.349 | 70.00th=[ 676], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 832], 00:37:14.349 | 99.00th=[ 898], 99.50th=[ 906], 99.90th=[ 955], 99.95th=[ 955], 00:37:14.349 | 99.99th=[ 955] 00:37:14.349 bw ( KiB/s): min= 4096, max= 4096, per=34.62%, avg=4096.00, stdev= 0.00, samples=1 00:37:14.349 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:14.349 lat (usec) : 500=24.20%, 750=56.52%, 1000=16.07% 00:37:14.349 lat (msec) : 2=0.19%, 50=3.02% 00:37:14.349 cpu : usr=0.80%, sys=1.70%, ctx=532, majf=0, minf=1 00:37:14.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:14.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.349 issued rwts: total=17,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:37:14.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:14.349 job2: (groupid=0, jobs=1): err= 0: pid=2406619: Tue Nov 26 07:46:58 2024 00:37:14.349 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:37:14.349 slat (nsec): min=7327, max=58602, avg=26768.83, stdev=4071.95 00:37:14.349 clat (usec): min=587, max=1233, avg=971.57, stdev=120.22 00:37:14.349 lat (usec): min=614, max=1273, avg=998.34, stdev=120.57 00:37:14.349 clat percentiles (usec): 00:37:14.349 | 1.00th=[ 676], 5.00th=[ 750], 10.00th=[ 807], 20.00th=[ 873], 00:37:14.349 | 30.00th=[ 914], 40.00th=[ 955], 50.00th=[ 979], 60.00th=[ 1004], 00:37:14.349 | 70.00th=[ 1045], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:37:14.349 | 99.00th=[ 1205], 99.50th=[ 1221], 99.90th=[ 1237], 99.95th=[ 1237], 00:37:14.349 | 99.99th=[ 1237] 00:37:14.349 write: IOPS=924, BW=3696KiB/s (3785kB/s)(3700KiB/1001msec); 0 zone resets 00:37:14.349 slat (nsec): min=10079, max=56632, avg=32891.53, stdev=8031.81 00:37:14.349 clat (usec): min=160, max=922, avg=483.41, stdev=139.64 00:37:14.349 lat (usec): min=171, max=957, avg=516.30, stdev=141.67 00:37:14.349 clat percentiles (usec): 00:37:14.349 | 1.00th=[ 225], 5.00th=[ 289], 10.00th=[ 322], 20.00th=[ 359], 00:37:14.349 | 30.00th=[ 388], 40.00th=[ 433], 50.00th=[ 469], 60.00th=[ 506], 00:37:14.349 | 70.00th=[ 545], 80.00th=[ 603], 90.00th=[ 693], 95.00th=[ 734], 00:37:14.349 | 99.00th=[ 840], 99.50th=[ 857], 99.90th=[ 922], 99.95th=[ 922], 00:37:14.350 | 99.99th=[ 922] 00:37:14.350 bw ( KiB/s): min= 4096, max= 4096, per=34.62%, avg=4096.00, stdev= 0.00, samples=1 00:37:14.350 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:14.350 lat (usec) : 250=1.25%, 500=36.19%, 750=26.58%, 1000=21.29% 00:37:14.350 lat (msec) : 2=14.68% 00:37:14.350 cpu : usr=2.20%, sys=4.50%, ctx=1439, majf=0, minf=1 00:37:14.350 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:14.350 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.350 issued rwts: total=512,925,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.350 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:14.350 job3: (groupid=0, jobs=1): err= 0: pid=2406620: Tue Nov 26 07:46:58 2024 00:37:14.350 read: IOPS=611, BW=2446KiB/s (2504kB/s)(2448KiB/1001msec) 00:37:14.350 slat (nsec): min=7327, max=61572, avg=25386.31, stdev=6922.91 00:37:14.350 clat (usec): min=510, max=1082, avg=827.06, stdev=97.21 00:37:14.350 lat (usec): min=537, max=1109, avg=852.45, stdev=98.48 00:37:14.350 clat percentiles (usec): 00:37:14.350 | 1.00th=[ 537], 5.00th=[ 652], 10.00th=[ 685], 20.00th=[ 750], 00:37:14.350 | 30.00th=[ 791], 40.00th=[ 824], 50.00th=[ 840], 60.00th=[ 865], 00:37:14.350 | 70.00th=[ 881], 80.00th=[ 906], 90.00th=[ 938], 95.00th=[ 963], 00:37:14.350 | 99.00th=[ 1012], 99.50th=[ 1057], 99.90th=[ 1090], 99.95th=[ 1090], 00:37:14.350 | 99.99th=[ 1090] 00:37:14.350 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:37:14.350 slat (nsec): min=9970, max=69173, avg=29939.06, stdev=11021.11 00:37:14.350 clat (usec): min=152, max=871, avg=426.27, stdev=93.18 00:37:14.350 lat (usec): min=164, max=907, avg=456.21, stdev=97.24 00:37:14.350 clat percentiles (usec): 00:37:14.350 | 1.00th=[ 208], 5.00th=[ 293], 10.00th=[ 310], 20.00th=[ 343], 00:37:14.350 | 30.00th=[ 367], 40.00th=[ 396], 50.00th=[ 437], 60.00th=[ 457], 00:37:14.350 | 70.00th=[ 478], 80.00th=[ 502], 90.00th=[ 545], 95.00th=[ 570], 00:37:14.350 | 99.00th=[ 627], 99.50th=[ 685], 99.90th=[ 783], 99.95th=[ 873], 00:37:14.350 | 99.99th=[ 873] 00:37:14.350 bw ( KiB/s): min= 4096, max= 4096, per=34.62%, avg=4096.00, stdev= 0.00, samples=1 00:37:14.350 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:14.350 lat (usec) : 250=1.53%, 500=47.62%, 750=20.78%, 1000=29.46% 00:37:14.350 lat (msec) : 
2=0.61% 00:37:14.350 cpu : usr=2.50%, sys=4.50%, ctx=1637, majf=0, minf=1 00:37:14.350 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:14.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.350 issued rwts: total=612,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.350 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:14.350 00:37:14.350 Run status group 0 (all jobs): 00:37:14.350 READ: bw=4617KiB/s (4728kB/s), 67.9KiB/s-2446KiB/s (69.6kB/s-2504kB/s), io=4640KiB (4751kB), run=1001-1005msec 00:37:14.350 WRITE: bw=11.6MiB/s (12.1MB/s), 2038KiB/s-4092KiB/s (2087kB/s-4190kB/s), io=11.6MiB (12.2MB), run=1001-1005msec 00:37:14.350 00:37:14.350 Disk stats (read/write): 00:37:14.350 nvme0n1: ios=39/512, merge=0/0, ticks=1545/188, in_queue=1733, util=96.29% 00:37:14.350 nvme0n2: ios=60/512, merge=0/0, ticks=652/291, in_queue=943, util=97.03% 00:37:14.350 nvme0n3: ios=534/579, merge=0/0, ticks=1406/290, in_queue=1696, util=96.72% 00:37:14.350 nvme0n4: ios=534/861, merge=0/0, ticks=1334/350, in_queue=1684, util=96.67% 00:37:14.350 07:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:37:14.350 [global] 00:37:14.350 thread=1 00:37:14.350 invalidate=1 00:37:14.350 rw=randwrite 00:37:14.350 time_based=1 00:37:14.350 runtime=1 00:37:14.350 ioengine=libaio 00:37:14.350 direct=1 00:37:14.350 bs=4096 00:37:14.350 iodepth=1 00:37:14.350 norandommap=0 00:37:14.350 numjobs=1 00:37:14.350 00:37:14.350 verify_dump=1 00:37:14.350 verify_backlog=512 00:37:14.350 verify_state_save=0 00:37:14.350 do_verify=1 00:37:14.350 verify=crc32c-intel 00:37:14.350 [job0] 00:37:14.350 filename=/dev/nvme0n1 00:37:14.350 [job1] 00:37:14.350 filename=/dev/nvme0n2 00:37:14.350 [job2] 00:37:14.350 
filename=/dev/nvme0n3 00:37:14.350 [job3] 00:37:14.350 filename=/dev/nvme0n4 00:37:14.350 Could not set queue depth (nvme0n1) 00:37:14.350 Could not set queue depth (nvme0n2) 00:37:14.350 Could not set queue depth (nvme0n3) 00:37:14.350 Could not set queue depth (nvme0n4) 00:37:14.615 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:14.615 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:14.615 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:14.615 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:14.615 fio-3.35 00:37:14.615 Starting 4 threads 00:37:16.029 00:37:16.029 job0: (groupid=0, jobs=1): err= 0: pid=2407054: Tue Nov 26 07:46:59 2024 00:37:16.029 read: IOPS=31, BW=128KiB/s (131kB/s)(132KiB/1034msec) 00:37:16.029 slat (nsec): min=10244, max=26203, avg=25163.45, stdev=2686.40 00:37:16.029 clat (usec): min=766, max=42158, avg=18412.07, stdev=20538.59 00:37:16.029 lat (usec): min=791, max=42183, avg=18437.23, stdev=20538.99 00:37:16.029 clat percentiles (usec): 00:37:16.029 | 1.00th=[ 766], 5.00th=[ 824], 10.00th=[ 865], 20.00th=[ 1057], 00:37:16.029 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1156], 60.00th=[41681], 00:37:16.029 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:16.029 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:16.029 | 99.99th=[42206] 00:37:16.029 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:37:16.029 slat (nsec): min=9619, max=70318, avg=30221.64, stdev=8472.78 00:37:16.029 clat (usec): min=354, max=1017, avg=791.84, stdev=94.48 00:37:16.029 lat (usec): min=364, max=1050, avg=822.06, stdev=97.61 00:37:16.029 clat percentiles (usec): 00:37:16.029 | 1.00th=[ 486], 5.00th=[ 619], 10.00th=[ 685], 
20.00th=[ 725], 00:37:16.029 | 30.00th=[ 758], 40.00th=[ 783], 50.00th=[ 807], 60.00th=[ 824], 00:37:16.029 | 70.00th=[ 848], 80.00th=[ 865], 90.00th=[ 889], 95.00th=[ 922], 00:37:16.029 | 99.00th=[ 971], 99.50th=[ 988], 99.90th=[ 1020], 99.95th=[ 1020], 00:37:16.029 | 99.99th=[ 1020] 00:37:16.029 bw ( KiB/s): min= 4096, max= 4096, per=38.18%, avg=4096.00, stdev= 0.00, samples=1 00:37:16.029 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:16.029 lat (usec) : 500=1.28%, 750=24.59%, 1000=68.44% 00:37:16.029 lat (msec) : 2=3.12%, 50=2.57% 00:37:16.029 cpu : usr=0.68%, sys=1.65%, ctx=546, majf=0, minf=1 00:37:16.029 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.029 issued rwts: total=33,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.029 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:16.029 job1: (groupid=0, jobs=1): err= 0: pid=2407069: Tue Nov 26 07:46:59 2024 00:37:16.029 read: IOPS=532, BW=2130KiB/s (2181kB/s)(2132KiB/1001msec) 00:37:16.029 slat (nsec): min=7057, max=44431, avg=25363.88, stdev=3831.01 00:37:16.029 clat (usec): min=414, max=1205, avg=874.20, stdev=129.18 00:37:16.029 lat (usec): min=440, max=1212, avg=899.56, stdev=129.17 00:37:16.029 clat percentiles (usec): 00:37:16.029 | 1.00th=[ 523], 5.00th=[ 603], 10.00th=[ 693], 20.00th=[ 775], 00:37:16.029 | 30.00th=[ 824], 40.00th=[ 857], 50.00th=[ 898], 60.00th=[ 930], 00:37:16.029 | 70.00th=[ 955], 80.00th=[ 988], 90.00th=[ 1012], 95.00th=[ 1045], 00:37:16.029 | 99.00th=[ 1123], 99.50th=[ 1172], 99.90th=[ 1205], 99.95th=[ 1205], 00:37:16.029 | 99.99th=[ 1205] 00:37:16.029 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:37:16.029 slat (nsec): min=9048, max=61828, avg=29919.14, stdev=6927.98 00:37:16.029 clat (usec): min=169, max=911, 
avg=466.73, stdev=128.86 00:37:16.029 lat (usec): min=201, max=942, avg=496.65, stdev=130.52 00:37:16.029 clat percentiles (usec): 00:37:16.029 | 1.00th=[ 235], 5.00th=[ 285], 10.00th=[ 314], 20.00th=[ 347], 00:37:16.029 | 30.00th=[ 375], 40.00th=[ 424], 50.00th=[ 453], 60.00th=[ 490], 00:37:16.029 | 70.00th=[ 529], 80.00th=[ 586], 90.00th=[ 644], 95.00th=[ 701], 00:37:16.029 | 99.00th=[ 807], 99.50th=[ 816], 99.90th=[ 840], 99.95th=[ 914], 00:37:16.029 | 99.99th=[ 914] 00:37:16.029 bw ( KiB/s): min= 4096, max= 4096, per=38.18%, avg=4096.00, stdev= 0.00, samples=1 00:37:16.029 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:16.029 lat (usec) : 250=1.09%, 500=40.46%, 750=28.39%, 1000=25.24% 00:37:16.029 lat (msec) : 2=4.82% 00:37:16.029 cpu : usr=2.20%, sys=4.80%, ctx=1557, majf=0, minf=1 00:37:16.029 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.029 issued rwts: total=533,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.029 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:16.029 job2: (groupid=0, jobs=1): err= 0: pid=2407087: Tue Nov 26 07:46:59 2024 00:37:16.029 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:37:16.029 slat (nsec): min=6871, max=64972, avg=29698.91, stdev=3508.89 00:37:16.029 clat (usec): min=662, max=1316, avg=1023.49, stdev=108.64 00:37:16.029 lat (usec): min=692, max=1345, avg=1053.19, stdev=108.47 00:37:16.029 clat percentiles (usec): 00:37:16.029 | 1.00th=[ 766], 5.00th=[ 865], 10.00th=[ 898], 20.00th=[ 938], 00:37:16.029 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1037], 00:37:16.029 | 70.00th=[ 1074], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[ 1205], 00:37:16.029 | 99.00th=[ 1287], 99.50th=[ 1303], 99.90th=[ 1319], 99.95th=[ 1319], 00:37:16.029 | 99.99th=[ 1319] 
00:37:16.029 write: IOPS=732, BW=2929KiB/s (2999kB/s)(2932KiB/1001msec); 0 zone resets 00:37:16.029 slat (nsec): min=9164, max=55979, avg=31397.90, stdev=11283.66 00:37:16.029 clat (usec): min=229, max=1093, avg=581.86, stdev=128.33 00:37:16.029 lat (usec): min=240, max=1130, avg=613.26, stdev=133.96 00:37:16.029 clat percentiles (usec): 00:37:16.029 | 1.00th=[ 273], 5.00th=[ 347], 10.00th=[ 400], 20.00th=[ 474], 00:37:16.029 | 30.00th=[ 523], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 627], 00:37:16.029 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 734], 95.00th=[ 766], 00:37:16.029 | 99.00th=[ 857], 99.50th=[ 881], 99.90th=[ 1090], 99.95th=[ 1090], 00:37:16.029 | 99.99th=[ 1090] 00:37:16.029 bw ( KiB/s): min= 4096, max= 4096, per=38.18%, avg=4096.00, stdev= 0.00, samples=1 00:37:16.029 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:16.029 lat (usec) : 250=0.24%, 500=15.18%, 750=39.84%, 1000=22.25% 00:37:16.029 lat (msec) : 2=22.49% 00:37:16.029 cpu : usr=2.70%, sys=5.00%, ctx=1246, majf=0, minf=1 00:37:16.029 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.029 issued rwts: total=512,733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.029 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:16.029 job3: (groupid=0, jobs=1): err= 0: pid=2407093: Tue Nov 26 07:46:59 2024 00:37:16.029 read: IOPS=16, BW=65.6KiB/s (67.1kB/s)(68.0KiB/1037msec) 00:37:16.029 slat (nsec): min=26530, max=27350, avg=26705.82, stdev=193.11 00:37:16.029 clat (usec): min=41006, max=42056, avg=41854.59, stdev=301.29 00:37:16.029 lat (usec): min=41033, max=42082, avg=41881.30, stdev=301.18 00:37:16.029 clat percentiles (usec): 00:37:16.029 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:37:16.029 | 30.00th=[41681], 40.00th=[42206], 
50.00th=[42206], 60.00th=[42206], 00:37:16.029 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:16.029 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:16.029 | 99.99th=[42206] 00:37:16.029 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:37:16.029 slat (nsec): min=9030, max=51689, avg=28507.33, stdev=9767.99 00:37:16.029 clat (usec): min=258, max=1015, avg=597.69, stdev=130.42 00:37:16.029 lat (usec): min=268, max=1048, avg=626.20, stdev=134.36 00:37:16.029 clat percentiles (usec): 00:37:16.029 | 1.00th=[ 293], 5.00th=[ 371], 10.00th=[ 420], 20.00th=[ 490], 00:37:16.029 | 30.00th=[ 529], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 635], 00:37:16.029 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 750], 95.00th=[ 799], 00:37:16.029 | 99.00th=[ 963], 99.50th=[ 1012], 99.90th=[ 1020], 99.95th=[ 1020], 00:37:16.029 | 99.99th=[ 1020] 00:37:16.029 bw ( KiB/s): min= 4096, max= 4096, per=38.18%, avg=4096.00, stdev= 0.00, samples=1 00:37:16.029 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:16.029 lat (usec) : 500=22.50%, 750=64.65%, 1000=9.07% 00:37:16.029 lat (msec) : 2=0.57%, 50=3.21% 00:37:16.029 cpu : usr=1.06%, sys=1.74%, ctx=529, majf=0, minf=1 00:37:16.029 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.030 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.030 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:16.030 00:37:16.030 Run status group 0 (all jobs): 00:37:16.030 READ: bw=4224KiB/s (4325kB/s), 65.6KiB/s-2130KiB/s (67.1kB/s-2181kB/s), io=4380KiB (4485kB), run=1001-1037msec 00:37:16.030 WRITE: bw=10.5MiB/s (11.0MB/s), 1975KiB/s-4092KiB/s (2022kB/s-4190kB/s), io=10.9MiB (11.4MB), run=1001-1037msec 00:37:16.030 00:37:16.030 
Disk stats (read/write): 00:37:16.030 nvme0n1: ios=78/512, merge=0/0, ticks=538/378, in_queue=916, util=95.19% 00:37:16.030 nvme0n2: ios=559/721, merge=0/0, ticks=571/327, in_queue=898, util=95.41% 00:37:16.030 nvme0n3: ios=529/512, merge=0/0, ticks=765/234, in_queue=999, util=97.78% 00:37:16.030 nvme0n4: ios=12/512, merge=0/0, ticks=503/239, in_queue=742, util=89.41% 00:37:16.030 07:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:37:16.030 [global] 00:37:16.030 thread=1 00:37:16.030 invalidate=1 00:37:16.030 rw=write 00:37:16.030 time_based=1 00:37:16.030 runtime=1 00:37:16.030 ioengine=libaio 00:37:16.030 direct=1 00:37:16.030 bs=4096 00:37:16.030 iodepth=128 00:37:16.030 norandommap=0 00:37:16.030 numjobs=1 00:37:16.030 00:37:16.030 verify_dump=1 00:37:16.030 verify_backlog=512 00:37:16.030 verify_state_save=0 00:37:16.030 do_verify=1 00:37:16.030 verify=crc32c-intel 00:37:16.030 [job0] 00:37:16.030 filename=/dev/nvme0n1 00:37:16.030 [job1] 00:37:16.030 filename=/dev/nvme0n2 00:37:16.030 [job2] 00:37:16.030 filename=/dev/nvme0n3 00:37:16.030 [job3] 00:37:16.030 filename=/dev/nvme0n4 00:37:16.030 Could not set queue depth (nvme0n1) 00:37:16.030 Could not set queue depth (nvme0n2) 00:37:16.030 Could not set queue depth (nvme0n3) 00:37:16.030 Could not set queue depth (nvme0n4) 00:37:16.292 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:16.292 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:16.292 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:16.292 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:16.292 fio-3.35 00:37:16.292 Starting 4 threads 00:37:17.701 
00:37:17.701 job0: (groupid=0, jobs=1): err= 0: pid=2407515: Tue Nov 26 07:47:01 2024 00:37:17.701 read: IOPS=7138, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1003msec) 00:37:17.701 slat (nsec): min=913, max=13417k, avg=68487.54, stdev=506031.58 00:37:17.701 clat (usec): min=1173, max=49709, avg=8888.96, stdev=5397.96 00:37:17.701 lat (usec): min=4061, max=57837, avg=8957.45, stdev=5446.79 00:37:17.701 clat percentiles (usec): 00:37:17.701 | 1.00th=[ 4490], 5.00th=[ 5604], 10.00th=[ 5997], 20.00th=[ 6783], 00:37:17.701 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7570], 00:37:17.701 | 70.00th=[ 7898], 80.00th=[ 8455], 90.00th=[10814], 95.00th=[22414], 00:37:17.701 | 99.00th=[32375], 99.50th=[36439], 99.90th=[45351], 99.95th=[45351], 00:37:17.701 | 99.99th=[49546] 00:37:17.701 write: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec); 0 zone resets 00:37:17.701 slat (nsec): min=1589, max=14723k, avg=67541.09, stdev=398879.82 00:37:17.701 clat (usec): min=689, max=52213, avg=8611.43, stdev=5497.22 00:37:17.701 lat (usec): min=698, max=52225, avg=8678.97, stdev=5537.50 00:37:17.701 clat percentiles (usec): 00:37:17.701 | 1.00th=[ 2507], 5.00th=[ 4621], 10.00th=[ 6063], 20.00th=[ 6849], 00:37:17.701 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7242], 60.00th=[ 7373], 00:37:17.701 | 70.00th=[ 7701], 80.00th=[ 8160], 90.00th=[13960], 95.00th=[14484], 00:37:17.701 | 99.00th=[35914], 99.50th=[46924], 99.90th=[51643], 99.95th=[51643], 00:37:17.701 | 99.99th=[52167] 00:37:17.701 bw ( KiB/s): min=21288, max=36056, per=29.94%, avg=28672.00, stdev=10442.55, samples=2 00:37:17.701 iops : min= 5322, max= 9014, avg=7168.00, stdev=2610.64, samples=2 00:37:17.701 lat (usec) : 750=0.02% 00:37:17.701 lat (msec) : 2=0.39%, 4=0.81%, 10=86.19%, 20=7.72%, 50=4.75% 00:37:17.701 lat (msec) : 100=0.12% 00:37:17.701 cpu : usr=2.69%, sys=4.99%, ctx=806, majf=0, minf=1 00:37:17.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:37:17.701 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:17.701 issued rwts: total=7160,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.701 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:17.701 job1: (groupid=0, jobs=1): err= 0: pid=2407516: Tue Nov 26 07:47:01 2024 00:37:17.701 read: IOPS=8118, BW=31.7MiB/s (33.3MB/s)(32.0MiB/1009msec) 00:37:17.701 slat (nsec): min=921, max=25748k, avg=58540.00, stdev=485976.11 00:37:17.701 clat (usec): min=2867, max=48925, avg=7755.18, stdev=3612.05 00:37:17.701 lat (usec): min=2878, max=48932, avg=7813.72, stdev=3628.91 00:37:17.701 clat percentiles (usec): 00:37:17.701 | 1.00th=[ 4424], 5.00th=[ 5473], 10.00th=[ 5932], 20.00th=[ 6456], 00:37:17.701 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7242], 00:37:17.701 | 70.00th=[ 7635], 80.00th=[ 8455], 90.00th=[ 9241], 95.00th=[10421], 00:37:17.701 | 99.00th=[26870], 99.50th=[32375], 99.90th=[45876], 99.95th=[45876], 00:37:17.701 | 99.99th=[49021] 00:37:17.701 write: IOPS=8498, BW=33.2MiB/s (34.8MB/s)(33.5MiB/1009msec); 0 zone resets 00:37:17.701 slat (nsec): min=1601, max=15757k, avg=55065.63, stdev=333149.14 00:37:17.701 clat (usec): min=1011, max=29044, avg=7284.69, stdev=2615.83 00:37:17.701 lat (usec): min=1020, max=29047, avg=7339.76, stdev=2631.12 00:37:17.701 clat percentiles (usec): 00:37:17.701 | 1.00th=[ 2966], 5.00th=[ 4293], 10.00th=[ 5604], 20.00th=[ 6521], 00:37:17.701 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 7046], 00:37:17.701 | 70.00th=[ 7242], 80.00th=[ 7504], 90.00th=[ 8455], 95.00th=[10552], 00:37:17.701 | 99.00th=[20317], 99.50th=[24511], 99.90th=[27919], 99.95th=[28967], 00:37:17.701 | 99.99th=[28967] 00:37:17.701 bw ( KiB/s): min=33672, max=33912, per=35.29%, avg=33792.00, stdev=169.71, samples=2 00:37:17.701 iops : min= 8418, max= 8478, avg=8448.00, stdev=42.43, samples=2 00:37:17.701 lat (msec) : 2=0.23%, 
4=1.49%, 10=91.93%, 20=4.99%, 50=1.35% 00:37:17.701 cpu : usr=5.95%, sys=6.45%, ctx=824, majf=0, minf=1 00:37:17.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:37:17.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:17.701 issued rwts: total=8192,8575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.701 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:17.701 job2: (groupid=0, jobs=1): err= 0: pid=2407529: Tue Nov 26 07:47:01 2024 00:37:17.701 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:37:17.701 slat (nsec): min=984, max=12465k, avg=103204.85, stdev=847157.40 00:37:17.701 clat (usec): min=3847, max=27528, avg=13060.81, stdev=3181.87 00:37:17.701 lat (usec): min=3857, max=27558, avg=13164.01, stdev=3255.46 00:37:17.701 clat percentiles (usec): 00:37:17.701 | 1.00th=[ 6783], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10421], 00:37:17.701 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12780], 60.00th=[13304], 00:37:17.701 | 70.00th=[13829], 80.00th=[15008], 90.00th=[17695], 95.00th=[19006], 00:37:17.701 | 99.00th=[23725], 99.50th=[24511], 99.90th=[24773], 99.95th=[24773], 00:37:17.701 | 99.99th=[27657] 00:37:17.701 write: IOPS=5291, BW=20.7MiB/s (21.7MB/s)(20.9MiB/1009msec); 0 zone resets 00:37:17.701 slat (nsec): min=1739, max=15947k, avg=84291.55, stdev=677482.87 00:37:17.701 clat (usec): min=1258, max=24621, avg=11074.03, stdev=2809.64 00:37:17.701 lat (usec): min=1267, max=24639, avg=11158.33, stdev=2847.45 00:37:17.701 clat percentiles (usec): 00:37:17.701 | 1.00th=[ 3785], 5.00th=[ 6849], 10.00th=[ 7570], 20.00th=[ 9110], 00:37:17.701 | 30.00th=[ 9896], 40.00th=[10552], 50.00th=[10945], 60.00th=[11207], 00:37:17.701 | 70.00th=[12256], 80.00th=[12911], 90.00th=[14877], 95.00th=[16581], 00:37:17.701 | 99.00th=[17695], 99.50th=[18744], 99.90th=[21365], 99.95th=[22938], 00:37:17.701 | 
99.99th=[24511] 00:37:17.701 bw ( KiB/s): min=20480, max=21208, per=21.77%, avg=20844.00, stdev=514.77, samples=2 00:37:17.701 iops : min= 5120, max= 5302, avg=5211.00, stdev=128.69, samples=2 00:37:17.701 lat (msec) : 2=0.10%, 4=0.48%, 10=20.35%, 20=77.27%, 50=1.81% 00:37:17.701 cpu : usr=3.57%, sys=5.95%, ctx=354, majf=0, minf=1 00:37:17.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:37:17.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:17.701 issued rwts: total=5120,5339,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.701 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:17.701 job3: (groupid=0, jobs=1): err= 0: pid=2407536: Tue Nov 26 07:47:01 2024 00:37:17.701 read: IOPS=2772, BW=10.8MiB/s (11.4MB/s)(10.9MiB/1008msec) 00:37:17.701 slat (nsec): min=1091, max=27520k, avg=181861.82, stdev=1507819.76 00:37:17.701 clat (msec): min=4, max=106, avg=22.18, stdev=25.58 00:37:17.701 lat (msec): min=4, max=106, avg=22.36, stdev=25.76 00:37:17.701 clat percentiles (msec): 00:37:17.701 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:37:17.701 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 11], 60.00th=[ 11], 00:37:17.701 | 70.00th=[ 14], 80.00th=[ 24], 90.00th=[ 74], 95.00th=[ 80], 00:37:17.701 | 99.00th=[ 107], 99.50th=[ 107], 99.90th=[ 107], 99.95th=[ 107], 00:37:17.701 | 99.99th=[ 107] 00:37:17.701 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:37:17.701 slat (nsec): min=1764, max=37762k, avg=152635.78, stdev=1320263.29 00:37:17.701 clat (usec): min=1524, max=100306, avg=18982.40, stdev=19520.17 00:37:17.701 lat (usec): min=1533, max=100313, avg=19135.03, stdev=19650.75 00:37:17.701 clat percentiles (msec): 00:37:17.702 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 8], 00:37:17.702 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 12], 60.00th=[ 15], 00:37:17.702 | 70.00th=[ 
15], 80.00th=[ 19], 90.00th=[ 52], 95.00th=[ 65], 00:37:17.702 | 99.00th=[ 89], 99.50th=[ 101], 99.90th=[ 101], 99.95th=[ 101], 00:37:17.702 | 99.99th=[ 101] 00:37:17.702 bw ( KiB/s): min= 4096, max=20480, per=12.83%, avg=12288.00, stdev=11585.24, samples=2 00:37:17.702 iops : min= 1024, max= 5120, avg=3072.00, stdev=2896.31, samples=2 00:37:17.702 lat (msec) : 2=0.05%, 4=0.82%, 10=45.03%, 20=33.73%, 50=6.05% 00:37:17.702 lat (msec) : 100=13.38%, 250=0.94% 00:37:17.702 cpu : usr=2.18%, sys=3.87%, ctx=229, majf=0, minf=2 00:37:17.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:37:17.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:17.702 issued rwts: total=2795,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.702 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:17.702 00:37:17.702 Run status group 0 (all jobs): 00:37:17.702 READ: bw=90.1MiB/s (94.5MB/s), 10.8MiB/s-31.7MiB/s (11.4MB/s-33.3MB/s), io=90.9MiB (95.3MB), run=1003-1009msec 00:37:17.702 WRITE: bw=93.5MiB/s (98.1MB/s), 11.9MiB/s-33.2MiB/s (12.5MB/s-34.8MB/s), io=94.4MiB (98.9MB), run=1003-1009msec 00:37:17.702 00:37:17.702 Disk stats (read/write): 00:37:17.702 nvme0n1: ios=6390/6656, merge=0/0, ticks=26196/25987, in_queue=52183, util=92.38% 00:37:17.702 nvme0n2: ios=6974/7168, merge=0/0, ticks=39099/31461, in_queue=70560, util=96.73% 00:37:17.702 nvme0n3: ios=4155/4310, merge=0/0, ticks=52995/46634, in_queue=99629, util=97.46% 00:37:17.702 nvme0n4: ios=1694/2048, merge=0/0, ticks=21996/21619, in_queue=43615, util=99.89% 00:37:17.702 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:37:17.702 [global] 00:37:17.702 thread=1 00:37:17.702 invalidate=1 00:37:17.702 rw=randwrite 00:37:17.702 
time_based=1 00:37:17.702 runtime=1 00:37:17.702 ioengine=libaio 00:37:17.702 direct=1 00:37:17.702 bs=4096 00:37:17.702 iodepth=128 00:37:17.702 norandommap=0 00:37:17.702 numjobs=1 00:37:17.702 00:37:17.702 verify_dump=1 00:37:17.702 verify_backlog=512 00:37:17.702 verify_state_save=0 00:37:17.702 do_verify=1 00:37:17.702 verify=crc32c-intel 00:37:17.702 [job0] 00:37:17.702 filename=/dev/nvme0n1 00:37:17.702 [job1] 00:37:17.702 filename=/dev/nvme0n2 00:37:17.702 [job2] 00:37:17.702 filename=/dev/nvme0n3 00:37:17.702 [job3] 00:37:17.702 filename=/dev/nvme0n4 00:37:17.702 Could not set queue depth (nvme0n1) 00:37:17.702 Could not set queue depth (nvme0n2) 00:37:17.702 Could not set queue depth (nvme0n3) 00:37:17.702 Could not set queue depth (nvme0n4) 00:37:17.967 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:17.967 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:17.967 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:17.967 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:17.967 fio-3.35 00:37:17.967 Starting 4 threads 00:37:19.375 00:37:19.375 job0: (groupid=0, jobs=1): err= 0: pid=2407963: Tue Nov 26 07:47:03 2024 00:37:19.375 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:37:19.375 slat (nsec): min=1109, max=14126k, avg=83841.39, stdev=700339.86 00:37:19.375 clat (usec): min=3438, max=26177, avg=11287.82, stdev=4106.38 00:37:19.375 lat (usec): min=3444, max=26184, avg=11371.66, stdev=4157.59 00:37:19.375 clat percentiles (usec): 00:37:19.375 | 1.00th=[ 4490], 5.00th=[ 5932], 10.00th=[ 6390], 20.00th=[ 7963], 00:37:19.375 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[10552], 60.00th=[11731], 00:37:19.375 | 70.00th=[12256], 80.00th=[13960], 90.00th=[16450], 95.00th=[20055], 
00:37:19.375 | 99.00th=[24249], 99.50th=[24773], 99.90th=[25560], 99.95th=[25560], 00:37:19.375 | 99.99th=[26084] 00:37:19.375 write: IOPS=5456, BW=21.3MiB/s (22.3MB/s)(21.4MiB/1003msec); 0 zone resets 00:37:19.375 slat (nsec): min=1674, max=45037k, avg=93929.95, stdev=858900.42 00:37:19.375 clat (usec): min=743, max=56205, avg=12662.76, stdev=10495.61 00:37:19.375 lat (usec): min=753, max=56235, avg=12756.69, stdev=10553.74 00:37:19.375 clat percentiles (usec): 00:37:19.375 | 1.00th=[ 1483], 5.00th=[ 4047], 10.00th=[ 4817], 20.00th=[ 5866], 00:37:19.375 | 30.00th=[ 7963], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[10552], 00:37:19.375 | 70.00th=[11731], 80.00th=[14746], 90.00th=[26870], 95.00th=[41157], 00:37:19.375 | 99.00th=[48497], 99.50th=[49546], 99.90th=[54789], 99.95th=[54789], 00:37:19.375 | 99.99th=[56361] 00:37:19.375 bw ( KiB/s): min=20439, max=22288, per=24.40%, avg=21363.50, stdev=1307.44, samples=2 00:37:19.375 iops : min= 5109, max= 5572, avg=5340.50, stdev=327.39, samples=2 00:37:19.375 lat (usec) : 750=0.02%, 1000=0.09% 00:37:19.375 lat (msec) : 2=0.72%, 4=1.89%, 10=42.40%, 20=46.00%, 50=8.64% 00:37:19.375 lat (msec) : 100=0.25% 00:37:19.375 cpu : usr=4.49%, sys=5.99%, ctx=316, majf=0, minf=1 00:37:19.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:37:19.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:19.375 issued rwts: total=5120,5473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:19.375 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:19.375 job1: (groupid=0, jobs=1): err= 0: pid=2407979: Tue Nov 26 07:47:03 2024 00:37:19.375 read: IOPS=6583, BW=25.7MiB/s (27.0MB/s)(26.0MiB/1011msec) 00:37:19.375 slat (nsec): min=1333, max=10111k, avg=67494.21, stdev=540702.62 00:37:19.375 clat (usec): min=3056, max=32456, avg=9222.03, stdev=3848.15 00:37:19.375 lat (usec): min=3060, max=32459, 
avg=9289.53, stdev=3881.59 00:37:19.375 clat percentiles (usec): 00:37:19.375 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5735], 20.00th=[ 6390], 00:37:19.375 | 30.00th=[ 6849], 40.00th=[ 7373], 50.00th=[ 7963], 60.00th=[ 8455], 00:37:19.375 | 70.00th=[10028], 80.00th=[11994], 90.00th=[14091], 95.00th=[17433], 00:37:19.375 | 99.00th=[22414], 99.50th=[22414], 99.90th=[32375], 99.95th=[32375], 00:37:19.375 | 99.99th=[32375] 00:37:19.375 write: IOPS=6833, BW=26.7MiB/s (28.0MB/s)(27.0MiB/1011msec); 0 zone resets 00:37:19.375 slat (nsec): min=1722, max=17406k, avg=64523.12, stdev=532737.25 00:37:19.375 clat (usec): min=1478, max=60793, avg=9682.60, stdev=8880.44 00:37:19.375 lat (usec): min=1490, max=60803, avg=9747.12, stdev=8937.63 00:37:19.375 clat percentiles (usec): 00:37:19.375 | 1.00th=[ 3064], 5.00th=[ 3982], 10.00th=[ 4490], 20.00th=[ 5538], 00:37:19.375 | 30.00th=[ 5866], 40.00th=[ 6128], 50.00th=[ 6456], 60.00th=[ 7373], 00:37:19.375 | 70.00th=[ 9110], 80.00th=[11994], 90.00th=[17695], 95.00th=[24511], 00:37:19.375 | 99.00th=[57934], 99.50th=[59507], 99.90th=[60556], 99.95th=[60556], 00:37:19.375 | 99.99th=[60556] 00:37:19.375 bw ( KiB/s): min=24200, max=30056, per=30.98%, avg=27128.00, stdev=4140.82, samples=2 00:37:19.375 iops : min= 6050, max= 7514, avg=6782.00, stdev=1035.20, samples=2 00:37:19.375 lat (msec) : 2=0.21%, 4=2.91%, 10=67.70%, 20=24.62%, 50=3.69% 00:37:19.375 lat (msec) : 100=0.87% 00:37:19.375 cpu : usr=3.27%, sys=6.44%, ctx=377, majf=0, minf=2 00:37:19.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:37:19.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:19.375 issued rwts: total=6656,6909,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:19.375 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:19.375 job2: (groupid=0, jobs=1): err= 0: pid=2408008: Tue Nov 26 07:47:03 2024 00:37:19.375 
read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:37:19.375 slat (nsec): min=1001, max=13678k, avg=95145.53, stdev=708755.67 00:37:19.375 clat (usec): min=3044, max=52944, avg=11892.78, stdev=6172.34 00:37:19.375 lat (usec): min=3053, max=52950, avg=11987.93, stdev=6231.79 00:37:19.375 clat percentiles (usec): 00:37:19.375 | 1.00th=[ 5014], 5.00th=[ 6259], 10.00th=[ 6849], 20.00th=[ 7308], 00:37:19.375 | 30.00th=[ 7963], 40.00th=[ 9110], 50.00th=[10552], 60.00th=[11338], 00:37:19.375 | 70.00th=[13304], 80.00th=[15664], 90.00th=[19006], 95.00th=[22414], 00:37:19.375 | 99.00th=[38011], 99.50th=[44303], 99.90th=[52691], 99.95th=[52691], 00:37:19.375 | 99.99th=[52691] 00:37:19.375 write: IOPS=4606, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1005msec); 0 zone resets 00:37:19.375 slat (nsec): min=1572, max=10821k, avg=116235.21, stdev=708489.84 00:37:19.375 clat (usec): min=1426, max=79789, avg=15682.54, stdev=15377.56 00:37:19.375 lat (usec): min=2189, max=79795, avg=15798.78, stdev=15471.85 00:37:19.375 clat percentiles (usec): 00:37:19.375 | 1.00th=[ 3163], 5.00th=[ 5866], 10.00th=[ 6456], 20.00th=[ 6980], 00:37:19.375 | 30.00th=[ 7963], 40.00th=[ 8979], 50.00th=[10159], 60.00th=[11207], 00:37:19.375 | 70.00th=[12649], 80.00th=[15664], 90.00th=[43254], 95.00th=[55837], 00:37:19.375 | 99.00th=[73925], 99.50th=[77071], 99.90th=[80217], 99.95th=[80217], 00:37:19.375 | 99.99th=[80217] 00:37:19.375 bw ( KiB/s): min=17104, max=19760, per=21.05%, avg=18432.00, stdev=1878.08, samples=2 00:37:19.375 iops : min= 4276, max= 4940, avg=4608.00, stdev=469.52, samples=2 00:37:19.375 lat (msec) : 2=0.02%, 4=1.37%, 10=46.58%, 20=40.78%, 50=8.02% 00:37:19.375 lat (msec) : 100=3.23% 00:37:19.375 cpu : usr=3.39%, sys=4.98%, ctx=382, majf=0, minf=1 00:37:19.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:37:19.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:37:19.375 issued rwts: total=4608,4630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:19.375 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:19.375 job3: (groupid=0, jobs=1): err= 0: pid=2408018: Tue Nov 26 07:47:03 2024 00:37:19.375 read: IOPS=4627, BW=18.1MiB/s (19.0MB/s)(18.2MiB/1009msec) 00:37:19.375 slat (nsec): min=986, max=12059k, avg=80223.81, stdev=606705.24 00:37:19.375 clat (usec): min=3477, max=44603, avg=10073.28, stdev=4401.65 00:37:19.375 lat (usec): min=3486, max=44617, avg=10153.51, stdev=4455.41 00:37:19.375 clat percentiles (usec): 00:37:19.375 | 1.00th=[ 4817], 5.00th=[ 5538], 10.00th=[ 6390], 20.00th=[ 6980], 00:37:19.375 | 30.00th=[ 7242], 40.00th=[ 7570], 50.00th=[ 8586], 60.00th=[10421], 00:37:19.375 | 70.00th=[11469], 80.00th=[12911], 90.00th=[14746], 95.00th=[16581], 00:37:19.375 | 99.00th=[27395], 99.50th=[34341], 99.90th=[44827], 99.95th=[44827], 00:37:19.375 | 99.99th=[44827] 00:37:19.375 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:37:19.375 slat (nsec): min=1573, max=9037.6k, avg=116757.19, stdev=639603.02 00:37:19.375 clat (usec): min=1218, max=56133, avg=15815.14, stdev=14426.47 00:37:19.375 lat (usec): min=1227, max=56141, avg=15931.90, stdev=14523.71 00:37:19.375 clat percentiles (usec): 00:37:19.375 | 1.00th=[ 3425], 5.00th=[ 4686], 10.00th=[ 4948], 20.00th=[ 6128], 00:37:19.375 | 30.00th=[ 7308], 40.00th=[ 8586], 50.00th=[ 9896], 60.00th=[11863], 00:37:19.375 | 70.00th=[13829], 80.00th=[21103], 90.00th=[45876], 95.00th=[50594], 00:37:19.375 | 99.00th=[54789], 99.50th=[55313], 99.90th=[56361], 99.95th=[56361], 00:37:19.375 | 99.99th=[56361] 00:37:19.375 bw ( KiB/s): min=19952, max=20480, per=23.09%, avg=20216.00, stdev=373.35, samples=2 00:37:19.375 iops : min= 4988, max= 5120, avg=5054.00, stdev=93.34, samples=2 00:37:19.375 lat (msec) : 2=0.23%, 4=0.57%, 10=52.57%, 20=34.97%, 50=8.68% 00:37:19.375 lat (msec) : 100=2.97% 00:37:19.375 cpu : usr=4.07%, sys=5.26%, 
ctx=385, majf=0, minf=2 00:37:19.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:37:19.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:19.376 issued rwts: total=4669,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:19.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:19.376 00:37:19.376 Run status group 0 (all jobs): 00:37:19.376 READ: bw=81.3MiB/s (85.3MB/s), 17.9MiB/s-25.7MiB/s (18.8MB/s-27.0MB/s), io=82.2MiB (86.2MB), run=1003-1011msec 00:37:19.376 WRITE: bw=85.5MiB/s (89.7MB/s), 18.0MiB/s-26.7MiB/s (18.9MB/s-28.0MB/s), io=86.5MiB (90.7MB), run=1003-1011msec 00:37:19.376 00:37:19.376 Disk stats (read/write): 00:37:19.376 nvme0n1: ios=4117/4143, merge=0/0, ticks=36068/46388, in_queue=82456, util=95.99% 00:37:19.376 nvme0n2: ios=5054/5120, merge=0/0, ticks=44765/50503, in_queue=95268, util=99.79% 00:37:19.376 nvme0n3: ios=3428/3584, merge=0/0, ticks=36200/60562, in_queue=96762, util=86.91% 00:37:19.376 nvme0n4: ios=4130/4288, merge=0/0, ticks=38093/57819, in_queue=95912, util=95.27% 00:37:19.376 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:37:19.376 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2408202 00:37:19.376 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:37:19.376 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:37:19.376 [global] 00:37:19.376 thread=1 00:37:19.376 invalidate=1 00:37:19.376 rw=read 00:37:19.376 time_based=1 00:37:19.376 runtime=10 00:37:19.376 ioengine=libaio 00:37:19.376 direct=1 00:37:19.376 bs=4096 00:37:19.376 iodepth=1 00:37:19.376 norandommap=1 00:37:19.376 
numjobs=1 00:37:19.376 00:37:19.376 [job0] 00:37:19.376 filename=/dev/nvme0n1 00:37:19.376 [job1] 00:37:19.376 filename=/dev/nvme0n2 00:37:19.376 [job2] 00:37:19.376 filename=/dev/nvme0n3 00:37:19.376 [job3] 00:37:19.376 filename=/dev/nvme0n4 00:37:19.376 Could not set queue depth (nvme0n1) 00:37:19.376 Could not set queue depth (nvme0n2) 00:37:19.376 Could not set queue depth (nvme0n3) 00:37:19.376 Could not set queue depth (nvme0n4) 00:37:19.640 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:19.640 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:19.640 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:19.640 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:19.640 fio-3.35 00:37:19.640 Starting 4 threads 00:37:22.186 07:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:37:22.446 07:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:37:22.446 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=1769472, buflen=4096 00:37:22.446 fio: pid=2408474, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:22.446 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=12877824, buflen=4096 00:37:22.446 fio: pid=2408464, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:22.446 07:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:22.446 07:47:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:37:22.706 07:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:22.706 07:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:37:22.707 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1576960, buflen=4096 00:37:22.707 fio: pid=2408413, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:22.967 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=10088448, buflen=4096 00:37:22.967 fio: pid=2408430, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:22.967 07:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:22.967 07:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:37:22.967 00:37:22.967 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2408413: Tue Nov 26 07:47:06 2024 00:37:22.967 read: IOPS=131, BW=524KiB/s (537kB/s)(1540KiB/2937msec) 00:37:22.967 slat (usec): min=4, max=9668, avg=58.06, stdev=658.98 00:37:22.967 clat (usec): min=485, max=41977, avg=7511.50, stdev=15036.30 00:37:22.967 lat (usec): min=493, max=50970, avg=7569.64, stdev=15162.63 00:37:22.967 clat percentiles (usec): 00:37:22.967 | 1.00th=[ 537], 5.00th=[ 603], 10.00th=[ 644], 20.00th=[ 701], 00:37:22.967 | 30.00th=[ 734], 40.00th=[ 766], 50.00th=[ 791], 60.00th=[ 824], 
00:37:22.967 | 70.00th=[ 865], 80.00th=[ 955], 90.00th=[41157], 95.00th=[41157], 00:37:22.967 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:22.967 | 99.99th=[42206] 00:37:22.967 bw ( KiB/s): min= 96, max= 2384, per=7.23%, avg=600.00, stdev=999.01, samples=5 00:37:22.967 iops : min= 24, max= 596, avg=150.00, stdev=249.75, samples=5 00:37:22.967 lat (usec) : 500=0.26%, 750=34.97%, 1000=46.63% 00:37:22.967 lat (msec) : 2=1.04%, 20=0.26%, 50=16.58% 00:37:22.967 cpu : usr=0.00%, sys=0.27%, ctx=388, majf=0, minf=1 00:37:22.967 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:22.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.967 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.967 issued rwts: total=386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.967 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:22.967 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2408430: Tue Nov 26 07:47:06 2024 00:37:22.967 read: IOPS=795, BW=3182KiB/s (3259kB/s)(9852KiB/3096msec) 00:37:22.967 slat (usec): min=4, max=17244, avg=42.49, stdev=533.30 00:37:22.967 clat (usec): min=250, max=41918, avg=1200.97, stdev=3447.08 00:37:22.967 lat (usec): min=256, max=41946, avg=1243.47, stdev=3487.24 00:37:22.967 clat percentiles (usec): 00:37:22.967 | 1.00th=[ 478], 5.00th=[ 562], 10.00th=[ 603], 20.00th=[ 725], 00:37:22.967 | 30.00th=[ 791], 40.00th=[ 824], 50.00th=[ 898], 60.00th=[ 988], 00:37:22.967 | 70.00th=[ 1057], 80.00th=[ 1106], 90.00th=[ 1172], 95.00th=[ 1221], 00:37:22.968 | 99.00th=[ 1991], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:37:22.968 | 99.99th=[41681] 00:37:22.968 bw ( KiB/s): min= 984, max= 4917, per=38.31%, avg=3180.83, stdev=1639.24, samples=6 00:37:22.968 iops : min= 246, max= 1229, avg=795.17, stdev=409.76, samples=6 00:37:22.968 lat (usec) : 500=1.79%, 750=21.10%, 
1000=39.04% 00:37:22.968 lat (msec) : 2=37.09%, 4=0.16%, 10=0.04%, 50=0.73% 00:37:22.968 cpu : usr=0.71%, sys=2.46%, ctx=2471, majf=0, minf=2 00:37:22.968 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:22.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.968 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.968 issued rwts: total=2464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.968 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:22.968 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2408464: Tue Nov 26 07:47:06 2024 00:37:22.968 read: IOPS=1149, BW=4598KiB/s (4709kB/s)(12.3MiB/2735msec) 00:37:22.968 slat (nsec): min=4377, max=62125, avg=24900.16, stdev=6697.19 00:37:22.968 clat (usec): min=392, max=1223, avg=832.05, stdev=71.05 00:37:22.968 lat (usec): min=419, max=1249, avg=856.95, stdev=72.48 00:37:22.968 clat percentiles (usec): 00:37:22.968 | 1.00th=[ 594], 5.00th=[ 693], 10.00th=[ 734], 20.00th=[ 791], 00:37:22.968 | 30.00th=[ 816], 40.00th=[ 832], 50.00th=[ 848], 60.00th=[ 857], 00:37:22.968 | 70.00th=[ 873], 80.00th=[ 881], 90.00th=[ 906], 95.00th=[ 922], 00:37:22.968 | 99.00th=[ 955], 99.50th=[ 971], 99.90th=[ 1029], 99.95th=[ 1090], 00:37:22.968 | 99.99th=[ 1221] 00:37:22.968 bw ( KiB/s): min= 4560, max= 4808, per=55.86%, avg=4636.80, stdev=99.21, samples=5 00:37:22.968 iops : min= 1140, max= 1202, avg=1159.20, stdev=24.80, samples=5 00:37:22.968 lat (usec) : 500=0.13%, 750=12.69%, 1000=86.93% 00:37:22.968 lat (msec) : 2=0.22% 00:37:22.968 cpu : usr=1.24%, sys=3.29%, ctx=3145, majf=0, minf=2 00:37:22.968 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:22.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.968 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.968 issued rwts: 
total=3145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.968 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:22.968 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2408474: Tue Nov 26 07:47:06 2024 00:37:22.968 read: IOPS=168, BW=671KiB/s (687kB/s)(1728KiB/2577msec) 00:37:22.968 slat (nsec): min=4933, max=63213, avg=15079.92, stdev=9164.58 00:37:22.968 clat (usec): min=417, max=42160, avg=5882.89, stdev=13468.93 00:37:22.968 lat (usec): min=426, max=42185, avg=5897.94, stdev=13472.97 00:37:22.968 clat percentiles (usec): 00:37:22.968 | 1.00th=[ 553], 5.00th=[ 635], 10.00th=[ 685], 20.00th=[ 734], 00:37:22.968 | 30.00th=[ 775], 40.00th=[ 799], 50.00th=[ 832], 60.00th=[ 865], 00:37:22.968 | 70.00th=[ 938], 80.00th=[ 1037], 90.00th=[41681], 95.00th=[42206], 00:37:22.968 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:22.968 | 99.99th=[42206] 00:37:22.968 bw ( KiB/s): min= 88, max= 2480, per=8.29%, avg=688.00, stdev=1032.36, samples=5 00:37:22.968 iops : min= 22, max= 620, avg=172.00, stdev=258.09, samples=5 00:37:22.968 lat (usec) : 500=0.46%, 750=23.33%, 1000=52.42% 00:37:22.968 lat (msec) : 2=11.09%, 20=0.23%, 50=12.24% 00:37:22.968 cpu : usr=0.31%, sys=0.19%, ctx=434, majf=0, minf=2 00:37:22.968 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:22.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.968 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.968 issued rwts: total=433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.968 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:22.968 00:37:22.968 Run status group 0 (all jobs): 00:37:22.968 READ: bw=8300KiB/s (8499kB/s), 524KiB/s-4598KiB/s (537kB/s-4709kB/s), io=25.1MiB (26.3MB), run=2577-3096msec 00:37:22.968 00:37:22.968 Disk stats (read/write): 00:37:22.968 nvme0n1: ios=382/0, merge=0/0, ticks=2769/0, 
in_queue=2769, util=92.86% 00:37:22.968 nvme0n2: ios=2430/0, merge=0/0, ticks=2802/0, in_queue=2802, util=92.79% 00:37:22.968 nvme0n3: ios=2939/0, merge=0/0, ticks=2370/0, in_queue=2370, util=95.60% 00:37:22.968 nvme0n4: ios=431/0, merge=0/0, ticks=2493/0, in_queue=2493, util=96.32% 00:37:23.228 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:23.228 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:37:23.228 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:23.228 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:37:23.488 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:23.488 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:37:23.748 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:23.748 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:37:23.748 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:37:23.748 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # 
wait 2408202 00:37:23.748 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:37:23.748 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:24.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:24.008 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:24.008 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:37:24.008 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:37:24.008 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:24.008 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:37:24.008 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:24.008 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:37:24.008 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:37:24.008 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:37:24.008 nvmf hotplug test: fio failed as expected 00:37:24.008 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:24.008 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f 
./local-job0-0-verify.state 00:37:24.008 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:24.268 rmmod nvme_tcp 00:37:24.268 rmmod nvme_fabrics 00:37:24.268 rmmod nvme_keyring 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2405029 ']' 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2405029 00:37:24.268 07:47:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2405029 ']' 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2405029 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2405029 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2405029' 00:37:24.268 killing process with pid 2405029 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2405029 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2405029 00:37:24.268 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:24.528 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:24.528 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:24.528 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:37:24.528 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-restore 00:37:24.528 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:37:24.528 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:24.528 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:24.528 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:24.528 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:24.528 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:24.528 07:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:26.440 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:26.440 00:37:26.440 real 0m28.776s 00:37:26.440 user 2m16.526s 00:37:26.440 sys 0m12.710s 00:37:26.440 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:26.440 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:26.440 ************************************ 00:37:26.440 END TEST nvmf_fio_target 00:37:26.441 ************************************ 00:37:26.441 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:37:26.441 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:26.441 07:47:10 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:26.441 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:26.441 ************************************ 00:37:26.441 START TEST nvmf_bdevio 00:37:26.441 ************************************ 00:37:26.441 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:37:26.702 * Looking for test storage... 00:37:26.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:37:26.702 07:47:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:26.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.702 --rc genhtml_branch_coverage=1 
00:37:26.702 --rc genhtml_function_coverage=1 00:37:26.702 --rc genhtml_legend=1 00:37:26.702 --rc geninfo_all_blocks=1 00:37:26.702 --rc geninfo_unexecuted_blocks=1 00:37:26.702 00:37:26.702 ' 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:26.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.702 --rc genhtml_branch_coverage=1 00:37:26.702 --rc genhtml_function_coverage=1 00:37:26.702 --rc genhtml_legend=1 00:37:26.702 --rc geninfo_all_blocks=1 00:37:26.702 --rc geninfo_unexecuted_blocks=1 00:37:26.702 00:37:26.702 ' 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:26.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.702 --rc genhtml_branch_coverage=1 00:37:26.702 --rc genhtml_function_coverage=1 00:37:26.702 --rc genhtml_legend=1 00:37:26.702 --rc geninfo_all_blocks=1 00:37:26.702 --rc geninfo_unexecuted_blocks=1 00:37:26.702 00:37:26.702 ' 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:26.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.702 --rc genhtml_branch_coverage=1 00:37:26.702 --rc genhtml_function_coverage=1 00:37:26.702 --rc genhtml_legend=1 00:37:26.702 --rc geninfo_all_blocks=1 00:37:26.702 --rc geninfo_unexecuted_blocks=1 00:37:26.702 00:37:26.702 ' 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:26.702 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:26.703 07:47:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:37:26.703 07:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:34.845 07:47:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:34.845 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:34.845 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:34.845 07:47:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:34.845 Found net devices under 0000:31:00.0: cvl_0_0 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:34.845 Found net devices under 0000:31:00.1: cvl_0_1 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:34.845 07:47:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:34.845 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:34.846 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:34.846 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:35.196 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:37:35.197 07:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:35.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:35.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:37:35.197 00:37:35.197 --- 10.0.0.2 ping statistics --- 00:37:35.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:35.197 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:35.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:35.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:37:35.197 00:37:35.197 --- 10.0.0.1 ping statistics --- 00:37:35.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:35.197 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2414096 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2414096 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2414096 ']' 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:35.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:35.197 07:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:35.197 [2024-11-26 07:47:19.236595] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:35.197 [2024-11-26 07:47:19.237730] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:37:35.197 [2024-11-26 07:47:19.237778] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:35.506 [2024-11-26 07:47:19.345854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:35.506 [2024-11-26 07:47:19.396133] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:35.506 [2024-11-26 07:47:19.396179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:35.506 [2024-11-26 07:47:19.396188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:35.506 [2024-11-26 07:47:19.396195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:35.506 [2024-11-26 07:47:19.396202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:35.506 [2024-11-26 07:47:19.398169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:35.506 [2024-11-26 07:47:19.398330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:35.506 [2024-11-26 07:47:19.398487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:35.506 [2024-11-26 07:47:19.398488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:35.506 [2024-11-26 07:47:19.481690] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:35.506 [2024-11-26 07:47:19.482714] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:35.506 [2024-11-26 07:47:19.483067] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:37:35.506 [2024-11-26 07:47:19.483441] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:35.506 [2024-11-26 07:47:19.483496] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:36.148 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:36.148 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:37:36.148 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:36.148 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:36.148 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:36.148 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:36.148 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:36.148 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.148 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:36.148 [2024-11-26 07:47:20.119367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:36.148 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.148 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:36.148 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.148 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:36.148 Malloc0 00:37:36.148 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:36.149 [2024-11-26 07:47:20.211695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:36.149 { 00:37:36.149 "params": { 00:37:36.149 "name": "Nvme$subsystem", 00:37:36.149 "trtype": "$TEST_TRANSPORT", 00:37:36.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:36.149 "adrfam": "ipv4", 00:37:36.149 "trsvcid": "$NVMF_PORT", 00:37:36.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:36.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:36.149 "hdgst": ${hdgst:-false}, 00:37:36.149 "ddgst": ${ddgst:-false} 00:37:36.149 }, 00:37:36.149 "method": "bdev_nvme_attach_controller" 00:37:36.149 } 00:37:36.149 EOF 00:37:36.149 )") 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:37:36.149 07:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:36.149 "params": { 00:37:36.149 "name": "Nvme1", 00:37:36.149 "trtype": "tcp", 00:37:36.149 "traddr": "10.0.0.2", 00:37:36.149 "adrfam": "ipv4", 00:37:36.149 "trsvcid": "4420", 00:37:36.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:36.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:36.149 "hdgst": false, 00:37:36.149 "ddgst": false 00:37:36.149 }, 00:37:36.149 "method": "bdev_nvme_attach_controller" 00:37:36.149 }' 00:37:36.149 [2024-11-26 07:47:20.277281] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:37:36.149 [2024-11-26 07:47:20.277349] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2414223 ] 00:37:36.409 [2024-11-26 07:47:20.360178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:36.409 [2024-11-26 07:47:20.405079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:36.409 [2024-11-26 07:47:20.405201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:36.409 [2024-11-26 07:47:20.405205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:36.670 I/O targets: 00:37:36.670 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:37:36.670 00:37:36.670 00:37:36.670 CUnit - A unit testing framework for C - Version 2.1-3 00:37:36.670 http://cunit.sourceforge.net/ 00:37:36.670 00:37:36.670 00:37:36.670 Suite: bdevio tests on: Nvme1n1 00:37:36.670 Test: blockdev write read block ...passed 00:37:36.670 Test: blockdev write zeroes read block ...passed 00:37:36.670 Test: blockdev write zeroes read no split ...passed 00:37:36.670 Test: blockdev 
write zeroes read split ...passed 00:37:36.670 Test: blockdev write zeroes read split partial ...passed 00:37:36.670 Test: blockdev reset ...[2024-11-26 07:47:20.783305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:37:36.670 [2024-11-26 07:47:20.783370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15be4b0 (9): Bad file descriptor 00:37:36.670 [2024-11-26 07:47:20.789761] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:37:36.670 passed 00:37:36.670 Test: blockdev write read 8 blocks ...passed 00:37:36.670 Test: blockdev write read size > 128k ...passed 00:37:36.670 Test: blockdev write read invalid size ...passed 00:37:36.931 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:36.931 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:36.931 Test: blockdev write read max offset ...passed 00:37:36.931 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:36.931 Test: blockdev writev readv 8 blocks ...passed 00:37:36.931 Test: blockdev writev readv 30 x 1block ...passed 00:37:36.931 Test: blockdev writev readv block ...passed 00:37:36.931 Test: blockdev writev readv size > 128k ...passed 00:37:36.931 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:36.931 Test: blockdev comparev and writev ...[2024-11-26 07:47:21.010034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:36.931 [2024-11-26 07:47:21.010058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:36.931 [2024-11-26 07:47:21.010070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:36.931 
[2024-11-26 07:47:21.010076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:36.931 [2024-11-26 07:47:21.010493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:36.931 [2024-11-26 07:47:21.010501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:36.931 [2024-11-26 07:47:21.010515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:36.931 [2024-11-26 07:47:21.010521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:36.931 [2024-11-26 07:47:21.010923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:36.932 [2024-11-26 07:47:21.010931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:36.932 [2024-11-26 07:47:21.010940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:36.932 [2024-11-26 07:47:21.010946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:36.932 [2024-11-26 07:47:21.011367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:36.932 [2024-11-26 07:47:21.011375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:36.932 [2024-11-26 07:47:21.011384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:36.932 [2024-11-26 07:47:21.011390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:36.932 passed 00:37:37.192 Test: blockdev nvme passthru rw ...passed 00:37:37.192 Test: blockdev nvme passthru vendor specific ...[2024-11-26 07:47:21.095405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:37.192 [2024-11-26 07:47:21.095414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:37.192 [2024-11-26 07:47:21.095649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:37.192 [2024-11-26 07:47:21.095656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:37.192 [2024-11-26 07:47:21.095939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:37.192 [2024-11-26 07:47:21.095946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:37.192 [2024-11-26 07:47:21.096219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:37.192 [2024-11-26 07:47:21.096226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:37.192 passed 00:37:37.192 Test: blockdev nvme admin passthru ...passed 00:37:37.192 Test: blockdev copy ...passed 00:37:37.192 00:37:37.192 Run Summary: Type Total Ran Passed Failed Inactive 00:37:37.192 suites 1 1 n/a 0 0 00:37:37.192 tests 23 23 23 0 0 00:37:37.192 asserts 152 152 152 0 n/a 00:37:37.192 00:37:37.192 Elapsed time = 1.149 
seconds 00:37:37.192 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:37.192 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.192 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:37.192 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.192 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:37:37.192 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:37:37.192 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:37.192 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:37:37.192 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:37.192 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:37:37.192 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:37.192 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:37.192 rmmod nvme_tcp 00:37:37.192 rmmod nvme_fabrics 00:37:37.192 rmmod nvme_keyring 00:37:37.192 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:37.453 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:37:37.453 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:37:37.453 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2414096 ']' 00:37:37.453 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2414096 00:37:37.453 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2414096 ']' 00:37:37.453 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2414096 00:37:37.453 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:37:37.453 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:37.454 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2414096 00:37:37.454 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:37:37.454 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:37:37.454 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2414096' 00:37:37.454 killing process with pid 2414096 00:37:37.454 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2414096 00:37:37.454 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2414096 00:37:37.716 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:37.716 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:37.716 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:37.716 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:37:37.716 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:37:37.716 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:37.716 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:37:37.716 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:37.716 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:37.716 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:37.716 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:37.716 07:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:39.631 07:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:39.631 00:37:39.631 real 0m13.124s 00:37:39.631 user 0m9.401s 00:37:39.631 sys 0m7.212s 00:37:39.631 07:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:39.631 07:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:39.631 ************************************ 00:37:39.631 END TEST nvmf_bdevio 00:37:39.631 ************************************ 00:37:39.631 07:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:37:39.631 00:37:39.631 real 5m10.353s 00:37:39.631 user 10m18.534s 00:37:39.631 sys 2m12.646s 00:37:39.631 07:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:37:39.631 07:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:39.631 ************************************ 00:37:39.631 END TEST nvmf_target_core_interrupt_mode 00:37:39.631 ************************************ 00:37:39.631 07:47:23 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:37:39.631 07:47:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:39.631 07:47:23 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:39.631 07:47:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:39.892 ************************************ 00:37:39.892 START TEST nvmf_interrupt 00:37:39.892 ************************************ 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:37:39.892 * Looking for test storage... 
00:37:39.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:39.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.892 --rc genhtml_branch_coverage=1 00:37:39.892 --rc genhtml_function_coverage=1 00:37:39.892 --rc genhtml_legend=1 00:37:39.892 --rc geninfo_all_blocks=1 00:37:39.892 --rc geninfo_unexecuted_blocks=1 00:37:39.892 00:37:39.892 ' 00:37:39.892 07:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:39.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.892 --rc genhtml_branch_coverage=1 00:37:39.892 --rc 
genhtml_function_coverage=1 00:37:39.893 --rc genhtml_legend=1 00:37:39.893 --rc geninfo_all_blocks=1 00:37:39.893 --rc geninfo_unexecuted_blocks=1 00:37:39.893 00:37:39.893 ' 00:37:39.893 07:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:39.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.893 --rc genhtml_branch_coverage=1 00:37:39.893 --rc genhtml_function_coverage=1 00:37:39.893 --rc genhtml_legend=1 00:37:39.893 --rc geninfo_all_blocks=1 00:37:39.893 --rc geninfo_unexecuted_blocks=1 00:37:39.893 00:37:39.893 ' 00:37:39.893 07:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:39.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.893 --rc genhtml_branch_coverage=1 00:37:39.893 --rc genhtml_function_coverage=1 00:37:39.893 --rc genhtml_legend=1 00:37:39.893 --rc geninfo_all_blocks=1 00:37:39.893 --rc geninfo_unexecuted_blocks=1 00:37:39.893 00:37:39.893 ' 00:37:39.893 07:47:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:39.893 07:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:37:39.893 07:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:39.893 07:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:39.893 07:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:39.893 07:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:39.893 07:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:39.893 07:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:39.893 07:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:39.893 07:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:39.893 
07:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:39.893 07:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:39.893 07:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.893 
07:47:24 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:39.893 07:47:24 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:39.893 07:47:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:40.155 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:40.155 
07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:40.155 07:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:37:40.155 07:47:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:48.300 07:47:32 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:48.300 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:48.301 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:48.301 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:48.301 07:47:32 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:48.301 Found net devices under 0000:31:00.0: cvl_0_0 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:48.301 Found net devices under 0000:31:00.1: cvl_0_1 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:48.301 07:47:32 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:48.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:48.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:37:48.301 00:37:48.301 --- 10.0.0.2 ping statistics --- 00:37:48.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:48.301 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:48.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:48.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:37:48.301 00:37:48.301 --- 10.0.0.1 ping statistics --- 00:37:48.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:48.301 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:48.301 07:47:32 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2419163 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2419163 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2419163 ']' 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:48.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:48.301 07:47:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:48.562 [2024-11-26 07:47:32.436034] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:48.562 [2024-11-26 07:47:32.437018] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:37:48.562 [2024-11-26 07:47:32.437056] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:48.562 [2024-11-26 07:47:32.519406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:48.562 [2024-11-26 07:47:32.554212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:48.562 [2024-11-26 07:47:32.554243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:48.562 [2024-11-26 07:47:32.554250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:48.562 [2024-11-26 07:47:32.554260] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:48.562 [2024-11-26 07:47:32.554266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:48.562 [2024-11-26 07:47:32.555386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:48.562 [2024-11-26 07:47:32.555388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:48.562 [2024-11-26 07:47:32.609953] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:48.562 [2024-11-26 07:47:32.610499] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:48.562 [2024-11-26 07:47:32.610833] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:37:49.134 07:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:49.134 07:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:37:49.135 07:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:49.135 07:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:49.135 07:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:49.135 07:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:49.135 07:47:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:37:49.135 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:37:49.396 5000+0 records in 00:37:49.396 5000+0 records out 00:37:49.396 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0186584 s, 549 MB/s 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:49.396 AIO0 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.396 07:47:33 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:49.396 [2024-11-26 07:47:33.344293] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:49.396 [2024-11-26 07:47:33.384343] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2419163 0 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2419163 0 idle 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2419163 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2419163 -w 256 00:37:49.396 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2419163 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.24 reactor_0' 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2419163 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.24 reactor_0 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:49.657 
07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2419163 1 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2419163 1 idle 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2419163 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2419163 -w 256 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2419167 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2419167 root 20 0 128.2g 
44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2419533 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:37:49.657 07:47:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:37:49.658 07:47:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:49.658 07:47:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2419163 0 00:37:49.658 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2419163 0 busy 00:37:49.658 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2419163 00:37:49.658 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:49.658 07:47:33 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:37:49.658 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:37:49.658 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:49.658 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:37:49.658 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:49.658 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:49.658 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:49.658 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2419163 -w 256 00:37:49.658 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:49.918 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2419163 root 20 0 128.2g 46080 33408 S 0.0 0.0 0:00.25 reactor_0' 00:37:49.918 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2419163 root 20 0 128.2g 46080 33408 S 0.0 0.0 0:00.25 reactor_0 00:37:49.918 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:49.918 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:49.918 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:49.918 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:49.918 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:49.918 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:49.918 07:47:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:37:50.861 07:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:37:50.862 07:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:50.862 07:47:34 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 2419163 -w 256 00:37:50.862 07:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2419163 root 20 0 128.2g 46080 33408 R 99.9 0.0 0:02.58 reactor_0' 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2419163 root 20 0 128.2g 46080 33408 R 99.9 0.0 0:02.58 reactor_0 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2419163 1 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2419163 1 busy 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2419163 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2419163 -w 256 00:37:51.122 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:51.383 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2419167 root 20 0 128.2g 46080 33408 R 99.9 0.0 0:01.36 reactor_1' 00:37:51.383 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2419167 root 20 0 128.2g 46080 33408 R 99.9 0.0 0:01.36 reactor_1 00:37:51.383 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:51.383 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:51.383 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:37:51.383 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:37:51.383 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:51.383 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:51.383 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:37:51.383 07:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:51.383 07:47:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2419533 00:38:01.387 Initializing NVMe Controllers 00:38:01.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:01.387 
Controller IO queue size 256, less than required. 00:38:01.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:01.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:01.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:01.387 Initialization complete. Launching workers. 00:38:01.387 ======================================================== 00:38:01.387 Latency(us) 00:38:01.387 Device Information : IOPS MiB/s Average min max 00:38:01.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16537.50 64.60 15489.98 2369.54 18914.73 00:38:01.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 20117.40 78.58 12727.44 7432.17 28784.82 00:38:01.388 ======================================================== 00:38:01.388 Total : 36654.89 143.18 13973.81 2369.54 28784.82 00:38:01.388 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2419163 0 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2419163 0 idle 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2419163 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:01.388 07:47:44 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2419163 -w 256 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2419163 root 20 0 128.2g 46080 33408 S 0.0 0.0 0:20.23 reactor_0' 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2419163 root 20 0 128.2g 46080 33408 S 0.0 0.0 0:20.23 reactor_0 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2419163 1 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2419163 1 idle 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2419163 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2419163 -w 256 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2419167 root 20 0 128.2g 46080 33408 S 0.0 0.0 0:10.00 reactor_1' 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2419167 root 20 0 128.2g 46080 33408 S 0.0 0.0 0:10.00 reactor_1 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:38:01.388 07:47:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:38:03.306 07:47:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:38:03.306 07:47:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:38:03.306 07:47:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2419163 0 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2419163 0 idle 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2419163 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:03.306 07:47:47 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2419163 -w 256 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2419163 root 20 0 128.2g 80640 33408 S 0.0 0.1 0:20.47 reactor_0' 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2419163 root 20 0 128.2g 80640 33408 S 0.0 0.1 0:20.47 reactor_0 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 
0 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2419163 1 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2419163 1 idle 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2419163 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2419163 -w 256 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2419167 root 20 0 128.2g 80640 33408 S 0.0 0.1 0:10.12 reactor_1' 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2419167 root 20 0 128.2g 80640 33408 S 0.0 0.1 0:10.12 reactor_1 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:03.306 07:47:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:03.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- 
# set +e 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:03.568 rmmod nvme_tcp 00:38:03.568 rmmod nvme_fabrics 00:38:03.568 rmmod nvme_keyring 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2419163 ']' 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2419163 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2419163 ']' 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2419163 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:03.568 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2419163 00:38:03.829 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:03.829 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:03.829 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2419163' 00:38:03.829 killing process with pid 2419163 00:38:03.829 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2419163 00:38:03.829 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2419163 00:38:03.829 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:03.829 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ 
tcp == \t\c\p ]] 00:38:03.829 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:03.829 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:38:03.829 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:38:03.829 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:03.829 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:38:03.829 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:03.829 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:03.829 07:47:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:03.829 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:03.829 07:47:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:06.377 07:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:06.377 00:38:06.377 real 0m26.195s 00:38:06.377 user 0m40.756s 00:38:06.377 sys 0m9.960s 00:38:06.377 07:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:06.377 07:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:06.377 ************************************ 00:38:06.377 END TEST nvmf_interrupt 00:38:06.377 ************************************ 00:38:06.377 00:38:06.377 real 31m10.837s 00:38:06.377 user 61m59.255s 00:38:06.377 sys 10m58.668s 00:38:06.377 07:47:50 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:06.377 07:47:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:06.377 ************************************ 00:38:06.377 END TEST nvmf_tcp 00:38:06.377 ************************************ 00:38:06.377 07:47:50 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:38:06.377 07:47:50 -- 
spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:06.377 07:47:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:06.377 07:47:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:06.377 07:47:50 -- common/autotest_common.sh@10 -- # set +x 00:38:06.377 ************************************ 00:38:06.377 START TEST spdkcli_nvmf_tcp 00:38:06.377 ************************************ 00:38:06.377 07:47:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:06.377 * Looking for test storage... 00:38:06.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:38:06.377 07:47:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:06.377 07:47:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:38:06.377 07:47:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:06.377 07:47:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:06.377 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:06.377 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:06.377 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:06.377 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:38:06.377 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:38:06.377 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:38:06.377 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 
00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:06.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.378 --rc genhtml_branch_coverage=1 00:38:06.378 --rc genhtml_function_coverage=1 00:38:06.378 --rc genhtml_legend=1 00:38:06.378 --rc geninfo_all_blocks=1 
00:38:06.378 --rc geninfo_unexecuted_blocks=1 00:38:06.378 00:38:06.378 ' 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:06.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.378 --rc genhtml_branch_coverage=1 00:38:06.378 --rc genhtml_function_coverage=1 00:38:06.378 --rc genhtml_legend=1 00:38:06.378 --rc geninfo_all_blocks=1 00:38:06.378 --rc geninfo_unexecuted_blocks=1 00:38:06.378 00:38:06.378 ' 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:06.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.378 --rc genhtml_branch_coverage=1 00:38:06.378 --rc genhtml_function_coverage=1 00:38:06.378 --rc genhtml_legend=1 00:38:06.378 --rc geninfo_all_blocks=1 00:38:06.378 --rc geninfo_unexecuted_blocks=1 00:38:06.378 00:38:06.378 ' 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:06.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.378 --rc genhtml_branch_coverage=1 00:38:06.378 --rc genhtml_function_coverage=1 00:38:06.378 --rc genhtml_legend=1 00:38:06.378 --rc geninfo_all_blocks=1 00:38:06.378 --rc geninfo_unexecuted_blocks=1 00:38:06.378 00:38:06.378 ' 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:38:06.378 07:47:50 
spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
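The nvmf_interrupt traces earlier in this log repeatedly step through `reactor_is_busy_or_idle` in interrupt/common.sh: `top -bHn 1 -p <pid>` output is grepped for the reactor thread, the %CPU column (field 9) is extracted with awk, truncated to an integer, and compared against busy/idle thresholds. The following is a minimal sketch of that comparison only; the function name is illustrative, and the 65/30 threshold values follow the later traces (the first trace in this section uses busy_threshold=30, so the exact values depend on the caller):

```shell
#!/usr/bin/env bash
# Sketch of the reactor state check traced above: %CPU is truncated to
# an integer and compared against the busy/idle thresholds. Names and
# threshold values are illustrative, not the exact SPDK helpers.
check_reactor_state() {
    local cpu_rate=$1 state=$2
    local busy_threshold=65 idle_threshold=30
    cpu_rate=${cpu_rate%.*}               # 99.9 -> 99, matching the trace
    if [[ $state == busy ]]; then
        (( cpu_rate >= busy_threshold ))  # busy: at or above 65% CPU
    else
        (( cpu_rate <= idle_threshold ))  # idle: at or below 30% CPU
    fi
}

check_reactor_state 99.9 busy && echo "reactor_1 is busy"
check_reactor_state 0.0 idle && echo "reactor_0 is idle"
```

In the traces above this is exactly the pattern observed: reactor_1 at cpu_rate=99.9 passes the busy check and returns 0, while both reactors at cpu_rate=0.0 pass the idle check after the fio workload completes.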
00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:06.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2422726 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2422726 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2422726 ']' 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:06.378 07:47:50 
spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:06.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:06.378 07:47:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:06.378 [2024-11-26 07:47:50.380756] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:38:06.378 [2024-11-26 07:47:50.380808] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2422726 ] 00:38:06.378 [2024-11-26 07:47:50.458242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:06.378 [2024-11-26 07:47:50.495722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:06.378 [2024-11-26 07:47:50.495725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:07.322 07:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:07.322 07:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:38:07.322 07:47:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:38:07.322 07:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:07.322 07:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:07.322 07:47:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:38:07.322 07:47:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- 
# [[ tcp == \r\d\m\a ]] 00:38:07.322 07:47:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:38:07.322 07:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:07.322 07:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:07.322 07:47:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:38:07.322 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:38:07.322 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:38:07.322 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:38:07.322 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:38:07.322 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:38:07.322 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:38:07.322 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:07.322 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:38:07.322 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:38:07.322 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:07.323 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:07.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:38:07.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:07.323 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:07.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:38:07.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:07.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:07.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:07.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:07.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:38:07.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:38:07.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:07.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:38:07.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:07.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:38:07.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:38:07.323 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:38:07.323 ' 00:38:09.870 [2024-11-26 07:47:53.897026] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:11.258 [2024-11-26 07:47:55.257461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:38:13.807 [2024-11-26 07:47:57.796897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:38:16.355 [2024-11-26 07:47:59.999444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:38:17.738 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:38:17.738 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:38:17.738 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:38:17.738 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:38:17.738 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:38:17.738 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:38:17.738 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:38:17.738 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:17.738 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:38:17.738 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:38:17.738 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:17.738 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:17.738 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:38:17.738 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:17.738 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:17.738 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:38:17.738 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:17.738 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:17.738 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:17.738 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:17.738 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:38:17.738 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:38:17.738 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:17.738 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:38:17.738 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:17.738 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:38:17.738 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:38:17.738 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:38:17.738 07:48:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:38:17.738 07:48:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:17.738 07:48:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:17.738 07:48:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:38:17.738 07:48:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:17.738 07:48:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:17.738 07:48:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:38:17.738 07:48:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:38:18.310 07:48:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:38:18.310 07:48:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:38:18.310 07:48:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:38:18.310 07:48:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:18.310 07:48:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:18.310 07:48:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:38:18.310 07:48:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:18.310 07:48:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:18.310 07:48:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:38:18.310 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:38:18.310 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:18.310 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:38:18.310 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:38:18.310 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:38:18.310 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:38:18.310 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:18.310 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:38:18.310 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:38:18.310 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:38:18.310 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:38:18.310 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:38:18.310 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:38:18.310 ' 00:38:23.603 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:38:23.603 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:38:23.603 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:23.603 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:38:23.603 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:38:23.603 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:38:23.603 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:38:23.603 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:23.603 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:38:23.603 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:38:23.603 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:38:23.603 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:38:23.603 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:38:23.603 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2422726 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2422726 ']' 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2422726 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2422726 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2422726' 00:38:23.603 killing process with pid 2422726 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2422726 00:38:23.603 07:48:07 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2422726 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2422726 ']' 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2422726 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2422726 ']' 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2422726 00:38:23.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2422726) - No such process 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2422726 is not found' 00:38:23.603 Process with pid 2422726 is not found 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:38:23.603 00:38:23.603 real 0m17.480s 00:38:23.603 user 0m38.076s 00:38:23.603 sys 0m0.771s 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:23.603 07:48:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:23.603 ************************************ 00:38:23.604 END TEST spdkcli_nvmf_tcp 00:38:23.604 ************************************ 00:38:23.604 07:48:07 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:23.604 07:48:07 -- common/autotest_common.sh@1105 -- # '[' 3 
-le 1 ']' 00:38:23.604 07:48:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:23.604 07:48:07 -- common/autotest_common.sh@10 -- # set +x 00:38:23.604 ************************************ 00:38:23.604 START TEST nvmf_identify_passthru 00:38:23.604 ************************************ 00:38:23.604 07:48:07 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:23.866 * Looking for test storage... 00:38:23.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:23.866 07:48:07 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:23.866 07:48:07 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:38:23.866 07:48:07 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:23.866 07:48:07 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:23.866 07:48:07 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:38:23.866 07:48:07 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:23.866 07:48:07 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:23.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.866 --rc genhtml_branch_coverage=1 00:38:23.866 --rc genhtml_function_coverage=1 00:38:23.866 --rc genhtml_legend=1 
00:38:23.866 --rc geninfo_all_blocks=1 00:38:23.866 --rc geninfo_unexecuted_blocks=1 00:38:23.866 00:38:23.866 ' 00:38:23.866 07:48:07 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:23.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.866 --rc genhtml_branch_coverage=1 00:38:23.866 --rc genhtml_function_coverage=1 00:38:23.866 --rc genhtml_legend=1 00:38:23.866 --rc geninfo_all_blocks=1 00:38:23.866 --rc geninfo_unexecuted_blocks=1 00:38:23.866 00:38:23.866 ' 00:38:23.866 07:48:07 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:23.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.866 --rc genhtml_branch_coverage=1 00:38:23.866 --rc genhtml_function_coverage=1 00:38:23.866 --rc genhtml_legend=1 00:38:23.866 --rc geninfo_all_blocks=1 00:38:23.866 --rc geninfo_unexecuted_blocks=1 00:38:23.866 00:38:23.866 ' 00:38:23.866 07:48:07 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:23.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.866 --rc genhtml_branch_coverage=1 00:38:23.866 --rc genhtml_function_coverage=1 00:38:23.866 --rc genhtml_legend=1 00:38:23.866 --rc geninfo_all_blocks=1 00:38:23.866 --rc geninfo_unexecuted_blocks=1 00:38:23.866 00:38:23.866 ' 00:38:23.866 07:48:07 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:23.866 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:38:23.866 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:23.866 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:23.866 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:23.866 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:23.866 07:48:07 nvmf_identify_passthru -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:23.866 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:23.866 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:23.866 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:23.866 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:23.867 07:48:07 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:38:23.867 07:48:07 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:23.867 07:48:07 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:23.867 07:48:07 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:23.867 07:48:07 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.867 07:48:07 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.867 07:48:07 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.867 07:48:07 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:23.867 07:48:07 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:38:23.867 07:48:07 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:23.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:23.867 07:48:07 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:23.867 07:48:07 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:38:23.867 07:48:07 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:23.867 07:48:07 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:23.867 07:48:07 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:23.867 07:48:07 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.867 07:48:07 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.867 07:48:07 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.867 07:48:07 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:23.867 07:48:07 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.867 07:48:07 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:23.867 07:48:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:23.867 07:48:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:23.867 07:48:07 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:38:23.867 07:48:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:32.110 
07:48:15 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:32.110 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:32.110 Found 0000:31:00.1 
(0x8086 - 0x159b) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:32.110 Found net devices under 0000:31:00.0: cvl_0_0 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:32.110 07:48:15 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:32.110 Found net devices under 0000:31:00.1: cvl_0_1 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:32.110 
07:48:15 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:32.110 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:32.111 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:32.111 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:32.111 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:32.111 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:32.111 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:32.111 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:32.111 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:32.111 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:32.111 07:48:15 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:32.111 07:48:16 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:32.111 07:48:16 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:32.111 07:48:16 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:32.111 07:48:16 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:32.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
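The `nvmf_tcp_init` sequence traced above moves one port of the NIC into a network namespace so target (10.0.0.2) and initiator (10.0.0.1) talk over real hardware on one host. A dry-run sketch of that topology (commands are echoed, not executed, since the real steps need root; swap `run` for `sudo "$@"` to apply them):

```shell
# Dry-run sketch of the namespace topology built by nvmf_tcp_init.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }   # replace with: sudo "$@"

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                        # target port into NS
run ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
```

The `ping -c 1 10.0.0.2` / `ping -c 1 10.0.0.1` pair that follows in the log verifies connectivity in both directions before the target app starts.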
00:38:32.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:38:32.111 00:38:32.111 --- 10.0.0.2 ping statistics --- 00:38:32.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:32.111 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:38:32.111 07:48:16 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:32.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:32.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:38:32.111 00:38:32.111 --- 10.0.0.1 ping statistics --- 00:38:32.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:32.111 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:38:32.111 07:48:16 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:32.111 07:48:16 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:38:32.111 07:48:16 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:32.111 07:48:16 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:32.111 07:48:16 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:32.111 07:48:16 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:32.111 07:48:16 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:32.111 07:48:16 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:32.111 07:48:16 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:32.111 07:48:16 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:38:32.111 07:48:16 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:32.111 07:48:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:32.111 07:48:16 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:38:32.111 
07:48:16 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:38:32.111 07:48:16 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:38:32.111 07:48:16 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:38:32.111 07:48:16 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:38:32.111 07:48:16 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:38:32.111 07:48:16 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:38:32.111 07:48:16 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:32.111 07:48:16 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:32.111 07:48:16 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:38:32.372 07:48:16 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:38:32.372 07:48:16 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:38:32.372 07:48:16 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:38:32.372 07:48:16 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:38:32.372 07:48:16 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:38:32.372 07:48:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:38:32.372 07:48:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:38:32.372 07:48:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:38:32.634 07:48:16 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:38:32.634 07:48:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:38:32.634 07:48:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:38:32.634 07:48:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:38:33.206 07:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:38:33.206 07:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:38:33.206 07:48:17 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:33.206 07:48:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:33.206 07:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:38:33.206 07:48:17 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:33.206 07:48:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:33.206 07:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2431053 00:38:33.206 07:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:33.206 07:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:38:33.206 07:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2431053 00:38:33.206 07:48:17 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2431053 ']' 00:38:33.206 07:48:17 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
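The serial and model numbers above are pulled out of `spdk_nvme_identify` output with a plain `grep | awk` pipeline: match the labelled line, print the third whitespace-separated field. A self-contained sketch of that extraction (the sample line below is stand-in text, not real identify output):

```shell
# Same field extraction the test uses twice: grep the labelled line,
# take column 3. Input here is a hypothetical sample; the real input
# comes from spdk_nvme_identify.
sample='Serial Number:                         S64GNE0R605494'
serial=$(printf '%s\n' "$sample" | grep 'Serial Number:' | awk '{print $3}')
echo "$serial"   # S64GNE0R605494
```

Note this relies on the label being exactly two words ("Serial Number:"), which is why the model-number variant in the log prints only `SAMSUNG`, the third field of a longer model string.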
00:38:33.206 07:48:17 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:33.206 07:48:17 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:33.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:33.206 07:48:17 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:33.206 07:48:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:33.472 [2024-11-26 07:48:17.365647] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:38:33.472 [2024-11-26 07:48:17.365703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:33.472 [2024-11-26 07:48:17.453037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:33.472 [2024-11-26 07:48:17.493894] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:33.472 [2024-11-26 07:48:17.493932] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:33.472 [2024-11-26 07:48:17.493940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:33.472 [2024-11-26 07:48:17.493947] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:33.472 [2024-11-26 07:48:17.493953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:33.472 [2024-11-26 07:48:17.495806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:33.472 [2024-11-26 07:48:17.495924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:33.472 [2024-11-26 07:48:17.496074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:33.472 [2024-11-26 07:48:17.496074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:34.045 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:34.045 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:38:34.045 07:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:38:34.045 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.045 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:34.045 INFO: Log level set to 20 00:38:34.045 INFO: Requests: 00:38:34.045 { 00:38:34.045 "jsonrpc": "2.0", 00:38:34.045 "method": "nvmf_set_config", 00:38:34.045 "id": 1, 00:38:34.045 "params": { 00:38:34.045 "admin_cmd_passthru": { 00:38:34.045 "identify_ctrlr": true 00:38:34.045 } 00:38:34.045 } 00:38:34.045 } 00:38:34.045 00:38:34.306 INFO: response: 00:38:34.306 { 00:38:34.306 "jsonrpc": "2.0", 00:38:34.306 "id": 1, 00:38:34.306 "result": true 00:38:34.306 } 00:38:34.306 00:38:34.306 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.306 07:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:38:34.306 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.306 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:34.306 INFO: Setting log level to 20 00:38:34.306 INFO: Setting log level to 20 00:38:34.306 INFO: Log level set to 20 00:38:34.306 INFO: Log level set to 20 00:38:34.306 
INFO: Requests: 00:38:34.306 { 00:38:34.306 "jsonrpc": "2.0", 00:38:34.306 "method": "framework_start_init", 00:38:34.306 "id": 1 00:38:34.306 } 00:38:34.306 00:38:34.306 INFO: Requests: 00:38:34.306 { 00:38:34.306 "jsonrpc": "2.0", 00:38:34.306 "method": "framework_start_init", 00:38:34.306 "id": 1 00:38:34.306 } 00:38:34.306 00:38:34.306 [2024-11-26 07:48:18.240374] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:38:34.306 INFO: response: 00:38:34.306 { 00:38:34.306 "jsonrpc": "2.0", 00:38:34.306 "id": 1, 00:38:34.306 "result": true 00:38:34.306 } 00:38:34.306 00:38:34.306 INFO: response: 00:38:34.306 { 00:38:34.306 "jsonrpc": "2.0", 00:38:34.306 "id": 1, 00:38:34.306 "result": true 00:38:34.306 } 00:38:34.306 00:38:34.306 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.306 07:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:34.306 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.306 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:34.306 INFO: Setting log level to 40 00:38:34.306 INFO: Setting log level to 40 00:38:34.306 INFO: Setting log level to 40 00:38:34.306 [2024-11-26 07:48:18.253705] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:34.306 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.306 07:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:38:34.306 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:34.306 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:34.306 07:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:38:34.306 07:48:18 
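The `rpc_cmd` exchanges logged above are ordinary JSON-RPC 2.0 over a UNIX socket. A sketch of assembling the `nvmf_set_config` payload shown in the trace (the real client is SPDK's `rpc_cmd` / `scripts/rpc.py`, which also handles the socket transport; this only reproduces the request body):

```shell
# Build the nvmf_set_config request seen in the log as a here-doc and
# sanity-check the flag that enables passthru identify handling.
req=$(cat <<'EOF'
{ "jsonrpc": "2.0", "method": "nvmf_set_config", "id": 1,
  "params": { "admin_cmd_passthru": { "identify_ctrlr": true } } }
EOF
)
printf '%s\n' "$req" | grep -q '"identify_ctrlr": true' && echo ok
```

With that flag set, the target answers admin Identify commands from the backing NVMe controller, which is the "Custom identify ctrlr handler enabled" notice that appears after `framework_start_init`.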
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.306 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:34.568 Nvme0n1 00:38:34.568 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.568 07:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:38:34.568 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.568 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:34.568 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.568 07:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:38:34.568 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.568 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:34.568 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.568 07:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:34.568 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.568 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:34.568 [2024-11-26 07:48:18.648226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:34.568 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.568 07:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:38:34.568 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.568 07:48:18 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:34.568 [ 00:38:34.568 { 00:38:34.568 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:34.568 "subtype": "Discovery", 00:38:34.568 "listen_addresses": [], 00:38:34.568 "allow_any_host": true, 00:38:34.568 "hosts": [] 00:38:34.568 }, 00:38:34.568 { 00:38:34.568 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:34.568 "subtype": "NVMe", 00:38:34.568 "listen_addresses": [ 00:38:34.568 { 00:38:34.568 "trtype": "TCP", 00:38:34.568 "adrfam": "IPv4", 00:38:34.568 "traddr": "10.0.0.2", 00:38:34.568 "trsvcid": "4420" 00:38:34.568 } 00:38:34.568 ], 00:38:34.568 "allow_any_host": true, 00:38:34.568 "hosts": [], 00:38:34.568 "serial_number": "SPDK00000000000001", 00:38:34.568 "model_number": "SPDK bdev Controller", 00:38:34.568 "max_namespaces": 1, 00:38:34.568 "min_cntlid": 1, 00:38:34.568 "max_cntlid": 65519, 00:38:34.568 "namespaces": [ 00:38:34.568 { 00:38:34.568 "nsid": 1, 00:38:34.568 "bdev_name": "Nvme0n1", 00:38:34.568 "name": "Nvme0n1", 00:38:34.568 "nguid": "3634473052605494002538450000002D", 00:38:34.568 "uuid": "36344730-5260-5494-0025-38450000002d" 00:38:34.568 } 00:38:34.568 ] 00:38:34.568 } 00:38:34.568 ] 00:38:34.568 07:48:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.568 07:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:34.568 07:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:38:34.568 07:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:38:34.829 07:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:38:34.829 07:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:34.829 07:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:38:34.829 07:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:38:35.090 07:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:38:35.090 07:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:38:35.090 07:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:38:35.090 07:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:35.090 07:48:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.090 07:48:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:35.090 07:48:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.090 07:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:38:35.090 07:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:38:35.090 07:48:19 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:35.090 07:48:19 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:38:35.090 07:48:19 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:35.090 07:48:19 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:38:35.090 07:48:19 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:35.090 07:48:19 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:35.350 rmmod nvme_tcp 00:38:35.350 rmmod nvme_fabrics 00:38:35.350 rmmod nvme_keyring 00:38:35.350 07:48:19 
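The `'[' S64GNE0R605494 '!=' S64GNE0R605494 ']'` checks above are the actual pass/fail criterion of this test: the serial and model read over NVMe/TCP must equal the ones read locally over PCIe, proving the passthru identify path works. A sketch of that comparison (function name is illustrative, not from the script):

```shell
# Compare a locally-read value against the one seen through the
# NVMe/TCP passthru path; mismatch fails the test.
check_passthru() {
  local local_val=$1 passthru_val=$2
  if [ "$local_val" != "$passthru_val" ]; then
    echo "mismatch: $local_val vs $passthru_val" >&2
    return 1
  fi
  echo match
}

check_passthru S64GNE0R605494 S64GNE0R605494   # match
```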
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:35.350 07:48:19 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:38:35.350 07:48:19 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:38:35.350 07:48:19 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2431053 ']' 00:38:35.350 07:48:19 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2431053 00:38:35.350 07:48:19 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2431053 ']' 00:38:35.350 07:48:19 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2431053 00:38:35.350 07:48:19 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:38:35.350 07:48:19 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:35.350 07:48:19 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2431053 00:38:35.350 07:48:19 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:35.350 07:48:19 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:35.350 07:48:19 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2431053' 00:38:35.350 killing process with pid 2431053 00:38:35.350 07:48:19 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2431053 00:38:35.350 07:48:19 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2431053 00:38:35.611 07:48:19 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:35.611 07:48:19 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:35.611 07:48:19 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:35.611 07:48:19 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:38:35.611 07:48:19 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:38:35.611 07:48:19 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # iptables-save 00:38:35.611 07:48:19 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:35.611 07:48:19 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:35.611 07:48:19 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:35.611 07:48:19 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:35.611 07:48:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:35.611 07:48:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:38.158 07:48:21 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:38.158 00:38:38.158 real 0m14.033s 00:38:38.158 user 0m10.898s 00:38:38.158 sys 0m7.242s 00:38:38.158 07:48:21 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:38.158 07:48:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:38.158 ************************************ 00:38:38.158 END TEST nvmf_identify_passthru 00:38:38.158 ************************************ 00:38:38.158 07:48:21 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:38.158 07:48:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:38.158 07:48:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:38.158 07:48:21 -- common/autotest_common.sh@10 -- # set +x 00:38:38.158 ************************************ 00:38:38.158 START TEST nvmf_dif 00:38:38.158 ************************************ 00:38:38.158 07:48:21 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:38.158 * Looking for test storage... 
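The `iptables-save | grep -v SPDK_NVMF | iptables-restore` teardown traced above works because setup tagged every rule it inserted with an `-m comment --comment 'SPDK_NVMF:...'` marker (visible earlier in the log at the `ipts -I INPUT ...` step). Cleanup then filters exactly those lines out of the dump. A sketch over a hypothetical rule dump:

```shell
# Drop only SPDK-tagged rules from an iptables-save dump. The dump text
# below is a made-up sample standing in for real iptables-save output.
dump='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:rule1
-A INPUT -j DROP'
printf '%s\n' "$dump" | grep -v SPDK_NVMF
```

Tagging rules at insert time and filtering by tag at teardown avoids having to remember rule positions or reconstruct the exact `-D` arguments later.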
00:38:38.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:38.158 07:48:21 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:38.158 07:48:21 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:38:38.158 07:48:21 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:38.158 07:48:21 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:38.158 07:48:21 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:38:38.158 07:48:21 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:38.158 07:48:21 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:38.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.158 --rc genhtml_branch_coverage=1 00:38:38.158 --rc genhtml_function_coverage=1 00:38:38.158 --rc genhtml_legend=1 00:38:38.158 --rc geninfo_all_blocks=1 00:38:38.158 --rc geninfo_unexecuted_blocks=1 00:38:38.158 00:38:38.158 ' 00:38:38.158 07:48:21 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:38.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.158 --rc genhtml_branch_coverage=1 00:38:38.158 --rc genhtml_function_coverage=1 00:38:38.159 --rc genhtml_legend=1 00:38:38.159 --rc geninfo_all_blocks=1 00:38:38.159 --rc geninfo_unexecuted_blocks=1 00:38:38.159 00:38:38.159 ' 00:38:38.159 07:48:21 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
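The `cmp_versions 1.15 '<' 2` trace above (checking whether the installed `lcov` predates version 2) compares dotted versions component-wise, with the shorter version treated as zero-padded. A compact sketch of that comparison, not the `scripts/common.sh` implementation itself:

```shell
# Component-wise dotted-version less-than, mirroring the traced
# cmp_versions loop: split on '.', compare numerically, missing
# components count as 0.
ver_lt() {
  local IFS=. i
  local -a a b
  read -ra a <<<"$1"
  read -ra b <<<"$2"
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1   # equal is not less-than
}

ver_lt 1.15 2 && echo old-lcov   # old-lcov
```

That distinction matters here because lcov 2 changed its option set, so the script picks `--rc lcov_branch_coverage=1 ...` flags for the 1.x series.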
'LCOV=lcov 00:38:38.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.159 --rc genhtml_branch_coverage=1 00:38:38.159 --rc genhtml_function_coverage=1 00:38:38.159 --rc genhtml_legend=1 00:38:38.159 --rc geninfo_all_blocks=1 00:38:38.159 --rc geninfo_unexecuted_blocks=1 00:38:38.159 00:38:38.159 ' 00:38:38.159 07:48:21 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:38.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.159 --rc genhtml_branch_coverage=1 00:38:38.159 --rc genhtml_function_coverage=1 00:38:38.159 --rc genhtml_legend=1 00:38:38.159 --rc geninfo_all_blocks=1 00:38:38.159 --rc geninfo_unexecuted_blocks=1 00:38:38.159 00:38:38.159 ' 00:38:38.159 07:48:21 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:38.159 07:48:21 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:38.159 07:48:21 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:38:38.159 07:48:21 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:38.159 07:48:21 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:38.159 07:48:21 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:38.159 07:48:21 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.159 07:48:21 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.159 07:48:21 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.159 07:48:21 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:38:38.159 07:48:21 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:38.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:38.159 07:48:21 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:38:38.159 07:48:21 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:38:38.159 07:48:21 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:38:38.159 07:48:21 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:38:38.159 07:48:21 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:38.159 07:48:21 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:38.159 07:48:21 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:38.159 07:48:21 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:38:38.159 07:48:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:38:46.297 07:48:29 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:46.297 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:46.297 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:46.297 07:48:29 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:46.297 07:48:29 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:46.298 Found net devices under 0000:31:00.0: cvl_0_0 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:46.298 Found net devices under 0000:31:00.1: cvl_0_1 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:46.298 
07:48:29 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:46.298 07:48:29 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:46.298 07:48:30 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:46.298 07:48:30 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:46.298 07:48:30 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:46.298 07:48:30 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:46.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:46.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:38:46.298 00:38:46.298 --- 10.0.0.2 ping statistics --- 00:38:46.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:46.298 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:38:46.298 07:48:30 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:46.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:46.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:38:46.298 00:38:46.298 --- 10.0.0.1 ping statistics --- 00:38:46.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:46.298 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:38:46.298 07:48:30 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:46.298 07:48:30 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:38:46.298 07:48:30 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:38:46.298 07:48:30 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:50.506 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:50.506 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:50.506 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:50.506 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:38:50.506 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:50.506 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:50.506 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:50.506 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:50.506 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:50.506 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:38:50.506 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:50.506 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:50.506 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:38:50.506 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:50.506 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:50.506 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:50.506 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:50.506 07:48:34 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:50.506 07:48:34 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:50.506 07:48:34 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:50.506 07:48:34 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:50.506 07:48:34 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:50.506 07:48:34 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:50.506 07:48:34 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:38:50.506 07:48:34 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:38:50.506 07:48:34 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:50.506 07:48:34 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:50.506 07:48:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:50.506 07:48:34 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2437934 00:38:50.506 07:48:34 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2437934 00:38:50.506 07:48:34 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:50.506 07:48:34 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2437934 ']' 00:38:50.506 07:48:34 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:50.506 07:48:34 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:50.506 07:48:34 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:50.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:50.506 07:48:34 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:50.506 07:48:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:50.506 [2024-11-26 07:48:34.398323] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:38:50.506 [2024-11-26 07:48:34.398372] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:50.506 [2024-11-26 07:48:34.481429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:50.506 [2024-11-26 07:48:34.515873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:50.506 [2024-11-26 07:48:34.515906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:50.506 [2024-11-26 07:48:34.515918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:50.506 [2024-11-26 07:48:34.515925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:50.506 [2024-11-26 07:48:34.515931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:50.506 [2024-11-26 07:48:34.516487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:51.078 07:48:35 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:51.078 07:48:35 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:38:51.078 07:48:35 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:51.078 07:48:35 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:51.078 07:48:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:51.078 07:48:35 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:51.078 07:48:35 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:38:51.078 07:48:35 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:38:51.078 07:48:35 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.078 07:48:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:51.339 [2024-11-26 07:48:35.212397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:51.339 07:48:35 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.339 07:48:35 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:38:51.339 07:48:35 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:51.339 07:48:35 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:51.339 07:48:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:51.339 ************************************ 00:38:51.339 START TEST fio_dif_1_default 00:38:51.339 ************************************ 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:51.339 bdev_null0 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.339 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:51.340 [2024-11-26 07:48:35.296748] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:51.340 { 00:38:51.340 "params": { 00:38:51.340 "name": "Nvme$subsystem", 00:38:51.340 "trtype": "$TEST_TRANSPORT", 00:38:51.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:51.340 "adrfam": "ipv4", 00:38:51.340 "trsvcid": "$NVMF_PORT", 00:38:51.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:51.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:51.340 "hdgst": ${hdgst:-false}, 00:38:51.340 "ddgst": ${ddgst:-false} 00:38:51.340 }, 00:38:51.340 "method": "bdev_nvme_attach_controller" 00:38:51.340 } 00:38:51.340 EOF 00:38:51.340 )") 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:51.340 "params": { 00:38:51.340 "name": "Nvme0", 00:38:51.340 "trtype": "tcp", 00:38:51.340 "traddr": "10.0.0.2", 00:38:51.340 "adrfam": "ipv4", 00:38:51.340 "trsvcid": "4420", 00:38:51.340 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:51.340 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:51.340 "hdgst": false, 00:38:51.340 "ddgst": false 00:38:51.340 }, 00:38:51.340 "method": "bdev_nvme_attach_controller" 00:38:51.340 }' 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:51.340 07:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:51.911 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:51.911 fio-3.35 
00:38:51.911 Starting 1 thread 00:39:04.142 00:39:04.142 filename0: (groupid=0, jobs=1): err= 0: pid=2438431: Tue Nov 26 07:48:46 2024 00:39:04.142 read: IOPS=188, BW=756KiB/s (774kB/s)(7584KiB/10036msec) 00:39:04.142 slat (nsec): min=5463, max=32316, avg=6278.37, stdev=1412.71 00:39:04.142 clat (usec): min=733, max=46594, avg=21155.19, stdev=20144.99 00:39:04.142 lat (usec): min=739, max=46627, avg=21161.47, stdev=20145.00 00:39:04.142 clat percentiles (usec): 00:39:04.142 | 1.00th=[ 865], 5.00th=[ 898], 10.00th=[ 914], 20.00th=[ 930], 00:39:04.142 | 30.00th=[ 947], 40.00th=[ 955], 50.00th=[41157], 60.00th=[41157], 00:39:04.142 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:39:04.142 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:39:04.142 | 99.99th=[46400] 00:39:04.142 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=756.80, stdev=26.01, samples=20 00:39:04.142 iops : min= 168, max= 192, avg=189.20, stdev= 6.50, samples=20 00:39:04.142 lat (usec) : 750=0.42%, 1000=48.42% 00:39:04.142 lat (msec) : 2=0.95%, 50=50.21% 00:39:04.142 cpu : usr=93.13%, sys=6.67%, ctx=14, majf=0, minf=236 00:39:04.142 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:04.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:04.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:04.142 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:04.142 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:04.142 00:39:04.142 Run status group 0 (all jobs): 00:39:04.142 READ: bw=756KiB/s (774kB/s), 756KiB/s-756KiB/s (774kB/s-774kB/s), io=7584KiB (7766kB), run=10036-10036msec 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in 
"$@" 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.142 00:39:04.142 real 0m11.165s 00:39:04.142 user 0m24.547s 00:39:04.142 sys 0m1.012s 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:04.142 ************************************ 00:39:04.142 END TEST fio_dif_1_default 00:39:04.142 ************************************ 00:39:04.142 07:48:46 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:39:04.142 07:48:46 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:04.142 07:48:46 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:04.142 07:48:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:04.142 ************************************ 00:39:04.142 START TEST fio_dif_1_multi_subsystems 00:39:04.142 ************************************ 00:39:04.142 07:48:46 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:04.142 bdev_null0 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.142 07:48:46 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:04.142 [2024-11-26 07:48:46.540104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:04.142 bdev_null1 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:04.142 { 00:39:04.142 "params": { 00:39:04.142 "name": "Nvme$subsystem", 00:39:04.142 "trtype": "$TEST_TRANSPORT", 00:39:04.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:04.142 "adrfam": "ipv4", 00:39:04.142 "trsvcid": "$NVMF_PORT", 00:39:04.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:04.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:04.142 "hdgst": ${hdgst:-false}, 00:39:04.142 "ddgst": ${ddgst:-false} 00:39:04.142 }, 00:39:04.142 "method": "bdev_nvme_attach_controller" 00:39:04.142 } 00:39:04.142 EOF 00:39:04.142 )") 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
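The `(( file = 1 ))` / `(( file <= files ))` / `cat` sequence traced above is `gen_fio_conf` (target/dif.sh) emitting the fio job file that fio later reads from /dev/fd/61, one `[filenameN]` section per attached bdev. A minimal standalone sketch of that loop; the section and bdev names are assumptions for illustration, not copied from dif.sh:

```shell
#!/bin/sh
# Sketch of the per-file job-section loop seen in the dif.sh trace.
# With files=1 two sections are emitted, matching the filename0/filename1
# jobs fio reports further down in this log (names here are assumed).
files=1
file=0
while [ "$file" -le "$files" ]; do
  # Each section points one fio job at one attached NVMe bdev.
  printf '[filename%d]\nfilename=Nvme%dn1\n' "$file" "$file"
  file=$((file + 1))
done
```

fio's multi-section job-file format is what lets a single invocation drive both subsystems as two threads, as the "Starting 2 threads" line below shows.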
00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:04.142 { 00:39:04.142 "params": { 00:39:04.142 "name": "Nvme$subsystem", 00:39:04.142 "trtype": "$TEST_TRANSPORT", 00:39:04.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:04.142 "adrfam": "ipv4", 00:39:04.142 "trsvcid": "$NVMF_PORT", 00:39:04.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:04.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:04.142 "hdgst": ${hdgst:-false}, 00:39:04.142 "ddgst": ${ddgst:-false} 00:39:04.142 }, 00:39:04.142 "method": "bdev_nvme_attach_controller" 00:39:04.142 } 00:39:04.142 EOF 00:39:04.142 )") 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
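The `config+=("$(cat <<-EOF ...)")` lines above are `gen_nvmf_target_json` (nvmf/common.sh) building one attach-controller JSON fragment per subsystem before `jq .` pretty-prints the result shown just below. A self-contained approximation of that templating step, with the environment values hard-coded to what this run used (the exact common.sh code differs; this only illustrates the expansion):

```shell
#!/bin/sh
# Approximation of gen_nvmf_target_json: one bdev_nvme_attach_controller
# params block per subsystem, comma-joined. Values are hard-coded to the
# ones visible in this log (tcp, 10.0.0.2, port 4420).
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=""
for subsystem in 0 1; do
  frag="{\"params\":{\"name\":\"Nvme$subsystem\",\"trtype\":\"$TEST_TRANSPORT\",\"traddr\":\"$NVMF_FIRST_TARGET_IP\",\"trsvcid\":\"$NVMF_PORT\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$subsystem\"},\"method\":\"bdev_nvme_attach_controller\"}"
  config="$config${config:+,}$frag"
done
printf '%s\n' "$config"
```

The result is fed to fio's spdk_bdev ioengine as `--spdk_json_conf /dev/fd/62`, so the bdev layer attaches to both NVMe/TCP subsystems without any config file on disk.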
00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:04.142 "params": { 00:39:04.142 "name": "Nvme0", 00:39:04.142 "trtype": "tcp", 00:39:04.142 "traddr": "10.0.0.2", 00:39:04.142 "adrfam": "ipv4", 00:39:04.142 "trsvcid": "4420", 00:39:04.142 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:04.142 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:04.142 "hdgst": false, 00:39:04.142 "ddgst": false 00:39:04.142 }, 00:39:04.142 "method": "bdev_nvme_attach_controller" 00:39:04.142 },{ 00:39:04.142 "params": { 00:39:04.142 "name": "Nvme1", 00:39:04.142 "trtype": "tcp", 00:39:04.142 "traddr": "10.0.0.2", 00:39:04.142 "adrfam": "ipv4", 00:39:04.142 "trsvcid": "4420", 00:39:04.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:04.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:04.142 "hdgst": false, 00:39:04.142 "ddgst": false 00:39:04.142 }, 00:39:04.142 "method": "bdev_nvme_attach_controller" 00:39:04.142 }' 00:39:04.142 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:04.143 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:04.143 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:04.143 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:04.143 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:04.143 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:04.143 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:04.143 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:04.143 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:04.143 07:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:04.143 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:04.143 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:04.143 fio-3.35 00:39:04.143 Starting 2 threads 00:39:14.144 00:39:14.144 filename0: (groupid=0, jobs=1): err= 0: pid=2440768: Tue Nov 26 07:48:57 2024 00:39:14.144 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10035msec) 00:39:14.144 slat (nsec): min=5465, max=29373, avg=6390.22, stdev=1731.27 00:39:14.144 clat (usec): min=40919, max=43008, avg=41106.80, stdev=358.07 00:39:14.144 lat (usec): min=40927, max=43014, avg=41113.19, stdev=358.13 00:39:14.144 clat percentiles (usec): 00:39:14.144 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:14.144 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:14.144 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:39:14.144 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:39:14.144 | 99.99th=[43254] 00:39:14.144 bw ( KiB/s): min= 384, max= 416, per=49.87%, avg=388.80, stdev=11.72, samples=20 00:39:14.144 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:39:14.144 lat (msec) : 50=100.00% 00:39:14.144 cpu : usr=95.36%, sys=4.42%, ctx=10, majf=0, minf=126 00:39:14.144 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:14.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.144 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.144 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.144 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:14.144 filename1: (groupid=0, jobs=1): err= 0: pid=2440769: Tue Nov 26 07:48:57 2024 00:39:14.144 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10010msec) 00:39:14.144 slat (nsec): min=5460, max=29822, avg=6710.52, stdev=1931.35 00:39:14.144 clat (usec): min=40831, max=42813, avg=41003.02, stdev=174.98 00:39:14.144 lat (usec): min=40839, max=42820, avg=41009.73, stdev=175.50 00:39:14.144 clat percentiles (usec): 00:39:14.144 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:39:14.144 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:14.144 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:14.144 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:39:14.144 | 99.99th=[42730] 00:39:14.144 bw ( KiB/s): min= 384, max= 416, per=49.87%, avg=388.80, stdev=11.72, samples=20 00:39:14.144 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:39:14.144 lat (msec) : 50=100.00% 00:39:14.144 cpu : usr=95.48%, sys=4.29%, ctx=32, majf=0, minf=126 00:39:14.144 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:14.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.144 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.144 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:14.144 00:39:14.144 Run status group 0 (all jobs): 00:39:14.144 READ: bw=778KiB/s (797kB/s), 389KiB/s-390KiB/s (398kB/s-399kB/s), io=7808KiB (7995kB), run=10010-10035msec 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@43 -- # local sub 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
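The per-thread fio summaries above are internally consistent: each thread completed 976 reads of 4 KiB (3904 KiB total) over the ~10 s runtime fio reports, which is where both the ~389 KiB/s bandwidth and ~97 IOPS figures come from. A quick check of that arithmetic using the 10035 msec runtime from the longer of the two jobs:

```shell
#!/bin/sh
# Cross-check of the fio summary above: 976 x 4 KiB I/Os over 10.035 s.
awk 'BEGIN {
  kib  = 976 * 4           # total KiB read per thread
  secs = 10035 / 1000.0    # runtime in seconds (10035 msec in the log)
  printf "bw=%.0fKiB/s iops=%.1f\n", kib / secs, 976 / secs
}'
# prints: bw=389KiB/s iops=97.3
```

The small gap from the reported avg of 97.20 IOPS is expected, since fio averages per-sample IOPS rather than dividing totals.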
00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.144 00:39:14.144 real 0m11.447s 00:39:14.144 user 0m34.719s 00:39:14.144 sys 0m1.183s 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:14.144 07:48:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:14.144 ************************************ 00:39:14.144 END TEST fio_dif_1_multi_subsystems 00:39:14.144 ************************************ 00:39:14.144 07:48:57 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:39:14.144 07:48:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:14.144 07:48:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:14.144 07:48:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:14.144 ************************************ 00:39:14.144 START TEST fio_dif_rand_params 00:39:14.144 ************************************ 00:39:14.144 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:39:14.144 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:39:14.144 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:39:14.144 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:39:14.144 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:39:14.144 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:39:14.144 07:48:58 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:39:14.144 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:39:14.144 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:39:14.144 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:14.144 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:14.144 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:14.144 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:14.144 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:14.144 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.144 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:14.144 bdev_null0 00:39:14.144 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:14.145 [2024-11-26 07:48:58.071853] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:14.145 { 00:39:14.145 "params": { 00:39:14.145 "name": "Nvme$subsystem", 00:39:14.145 "trtype": "$TEST_TRANSPORT", 00:39:14.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:14.145 "adrfam": "ipv4", 00:39:14.145 "trsvcid": "$NVMF_PORT", 00:39:14.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:14.145 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:39:14.145 "hdgst": ${hdgst:-false}, 00:39:14.145 "ddgst": ${ddgst:-false} 00:39:14.145 }, 00:39:14.145 "method": "bdev_nvme_attach_controller" 00:39:14.145 } 00:39:14.145 EOF 00:39:14.145 )") 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 
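The `ldd ... | grep libasan | awk '{print $3}'` pipeline traced here is how autotest_common.sh decides what goes into fio's `LD_PRELOAD`: if the spdk_bdev fio plugin links a sanitizer runtime, that runtime must be preloaded ahead of the plugin, otherwise only the plugin is preloaded. A standalone sketch of the probe; the binary path is a stand-in example, not the jenkins workspace path:

```shell
#!/bin/sh
# Probe a binary for an ASAN runtime the way the trace above does. On a
# non-sanitized build the pipeline yields an empty string, so LD_PRELOAD
# ends up holding only the plugin (note the leading space, visible in
# the LD_PRELOAD=' /var/jenkins/...' line of this log).
plugin=/bin/ls   # stand-in for build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" 2>/dev/null | grep libasan | awk '{print $3}')
if [ -n "$asan_lib" ]; then
  LD_PRELOAD="$asan_lib $plugin"
else
  LD_PRELOAD=" $plugin"
fi
printf '%s\n' "$LD_PRELOAD"
```

The same probe is repeated for `libclang_rt.asan`, covering both the GCC and Clang sanitizer runtimes, which is why the loop over `sanitizers` runs twice in the trace.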
00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:14.145 "params": { 00:39:14.145 "name": "Nvme0", 00:39:14.145 "trtype": "tcp", 00:39:14.145 "traddr": "10.0.0.2", 00:39:14.145 "adrfam": "ipv4", 00:39:14.145 "trsvcid": "4420", 00:39:14.145 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:14.145 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:14.145 "hdgst": false, 00:39:14.145 "ddgst": false 00:39:14.145 }, 00:39:14.145 "method": "bdev_nvme_attach_controller" 00:39:14.145 }' 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:14.145 07:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:14.406 filename0: (g=0): rw=randread, bs=(R) 
128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:14.406 ... 00:39:14.406 fio-3.35 00:39:14.406 Starting 3 threads 00:39:20.988 00:39:20.988 filename0: (groupid=0, jobs=1): err= 0: pid=2442973: Tue Nov 26 07:49:04 2024 00:39:20.988 read: IOPS=263, BW=33.0MiB/s (34.6MB/s)(165MiB/5008msec) 00:39:20.988 slat (nsec): min=5547, max=63588, avg=7704.02, stdev=2380.24 00:39:20.988 clat (usec): min=5638, max=50827, avg=11353.08, stdev=3197.35 00:39:20.988 lat (usec): min=5645, max=50833, avg=11360.79, stdev=3197.42 00:39:20.988 clat percentiles (usec): 00:39:20.988 | 1.00th=[ 6783], 5.00th=[ 7898], 10.00th=[ 8455], 20.00th=[ 9372], 00:39:20.988 | 30.00th=[10028], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:39:20.988 | 70.00th=[12256], 80.00th=[12911], 90.00th=[13698], 95.00th=[14222], 00:39:20.988 | 99.00th=[15533], 99.50th=[16712], 99.90th=[50594], 99.95th=[50594], 00:39:20.988 | 99.99th=[50594] 00:39:20.988 bw ( KiB/s): min=32000, max=35584, per=36.16%, avg=33766.40, stdev=1419.97, samples=10 00:39:20.988 iops : min= 250, max= 278, avg=263.80, stdev=11.09, samples=10 00:39:20.988 lat (msec) : 10=28.59%, 20=70.95%, 50=0.23%, 100=0.23% 00:39:20.988 cpu : usr=94.65%, sys=5.07%, ctx=6, majf=0, minf=114 00:39:20.988 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.988 issued rwts: total=1322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.988 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:20.988 filename0: (groupid=0, jobs=1): err= 0: pid=2442974: Tue Nov 26 07:49:04 2024 00:39:20.988 read: IOPS=246, BW=30.8MiB/s (32.3MB/s)(154MiB/5006msec) 00:39:20.988 slat (nsec): min=5568, max=37198, avg=7651.28, stdev=1937.50 00:39:20.988 clat (usec): min=6065, max=54356, avg=12167.69, stdev=6663.77 00:39:20.988 lat (usec): 
min=6072, max=54364, avg=12175.35, stdev=6663.98 00:39:20.988 clat percentiles (usec): 00:39:20.988 | 1.00th=[ 6980], 5.00th=[ 7898], 10.00th=[ 8455], 20.00th=[ 9503], 00:39:20.988 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11076], 60.00th=[11600], 00:39:20.988 | 70.00th=[12256], 80.00th=[12911], 90.00th=[13960], 95.00th=[15139], 00:39:20.988 | 99.00th=[51119], 99.50th=[53216], 99.90th=[54264], 99.95th=[54264], 00:39:20.988 | 99.99th=[54264] 00:39:20.988 bw ( KiB/s): min=27136, max=36096, per=33.75%, avg=31513.60, stdev=2914.98, samples=10 00:39:20.988 iops : min= 212, max= 282, avg=246.20, stdev=22.77, samples=10 00:39:20.988 lat (msec) : 10=26.76%, 20=70.56%, 50=1.22%, 100=1.46% 00:39:20.988 cpu : usr=94.27%, sys=5.49%, ctx=11, majf=0, minf=109 00:39:20.988 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.988 issued rwts: total=1233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.988 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:20.988 filename0: (groupid=0, jobs=1): err= 0: pid=2442975: Tue Nov 26 07:49:04 2024 00:39:20.988 read: IOPS=222, BW=27.9MiB/s (29.2MB/s)(141MiB/5043msec) 00:39:20.988 slat (nsec): min=5536, max=48856, avg=7645.46, stdev=2469.39 00:39:20.988 clat (usec): min=7308, max=55559, avg=13412.60, stdev=7978.22 00:39:20.988 lat (usec): min=7314, max=55565, avg=13420.24, stdev=7978.31 00:39:20.988 clat percentiles (usec): 00:39:20.988 | 1.00th=[ 7898], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10028], 00:39:20.988 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11731], 60.00th=[12518], 00:39:20.988 | 70.00th=[13435], 80.00th=[14222], 90.00th=[15139], 95.00th=[16319], 00:39:20.988 | 99.00th=[52691], 99.50th=[53216], 99.90th=[55313], 99.95th=[55313], 00:39:20.988 | 99.99th=[55313] 00:39:20.988 bw ( KiB/s): min=22528, max=34816, 
per=30.76%, avg=28723.20, stdev=3513.83, samples=10 00:39:20.988 iops : min= 176, max= 272, avg=224.40, stdev=27.45, samples=10 00:39:20.988 lat (msec) : 10=20.64%, 20=75.44%, 50=0.62%, 100=3.29% 00:39:20.988 cpu : usr=94.41%, sys=5.34%, ctx=16, majf=0, minf=95 00:39:20.988 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.988 issued rwts: total=1124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.988 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:20.988 00:39:20.988 Run status group 0 (all jobs): 00:39:20.988 READ: bw=91.2MiB/s (95.6MB/s), 27.9MiB/s-33.0MiB/s (29.2MB/s-34.6MB/s), io=460MiB (482MB), run=5006-5043msec 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:20.988 07:49:04 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.988 bdev_null0 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set 
+x 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.988 [2024-11-26 07:49:04.435656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.988 bdev_null1 00:39:20.988 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.989 bdev_null2 00:39:20.989 07:49:04 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:20.989 { 00:39:20.989 "params": { 00:39:20.989 "name": "Nvme$subsystem", 00:39:20.989 "trtype": "$TEST_TRANSPORT", 00:39:20.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:20.989 "adrfam": "ipv4", 00:39:20.989 "trsvcid": "$NVMF_PORT", 00:39:20.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:20.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:20.989 "hdgst": ${hdgst:-false}, 00:39:20.989 "ddgst": ${ddgst:-false} 00:39:20.989 }, 00:39:20.989 "method": "bdev_nvme_attach_controller" 00:39:20.989 } 00:39:20.989 EOF 00:39:20.989 )") 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:39:20.989 
07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:20.989 { 00:39:20.989 "params": { 00:39:20.989 "name": "Nvme$subsystem", 00:39:20.989 "trtype": "$TEST_TRANSPORT", 00:39:20.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:20.989 "adrfam": "ipv4", 00:39:20.989 "trsvcid": "$NVMF_PORT", 00:39:20.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:20.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:20.989 "hdgst": ${hdgst:-false}, 00:39:20.989 "ddgst": ${ddgst:-false} 00:39:20.989 }, 00:39:20.989 "method": "bdev_nvme_attach_controller" 00:39:20.989 } 00:39:20.989 EOF 00:39:20.989 )") 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:20.989 07:49:04 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:20.989 { 00:39:20.989 "params": { 00:39:20.989 "name": "Nvme$subsystem", 00:39:20.989 "trtype": "$TEST_TRANSPORT", 00:39:20.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:20.989 "adrfam": "ipv4", 00:39:20.989 "trsvcid": "$NVMF_PORT", 00:39:20.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:20.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:20.989 "hdgst": ${hdgst:-false}, 00:39:20.989 "ddgst": ${ddgst:-false} 00:39:20.989 }, 00:39:20.989 "method": "bdev_nvme_attach_controller" 00:39:20.989 } 00:39:20.989 EOF 00:39:20.989 )") 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:20.989 "params": { 00:39:20.989 "name": "Nvme0", 00:39:20.989 "trtype": "tcp", 00:39:20.989 "traddr": "10.0.0.2", 00:39:20.989 "adrfam": "ipv4", 00:39:20.989 "trsvcid": "4420", 00:39:20.989 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:20.989 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:20.989 "hdgst": false, 00:39:20.989 "ddgst": false 00:39:20.989 }, 00:39:20.989 "method": "bdev_nvme_attach_controller" 00:39:20.989 },{ 00:39:20.989 "params": { 00:39:20.989 "name": "Nvme1", 00:39:20.989 "trtype": "tcp", 00:39:20.989 "traddr": "10.0.0.2", 00:39:20.989 "adrfam": "ipv4", 00:39:20.989 "trsvcid": "4420", 00:39:20.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:20.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:20.989 "hdgst": false, 00:39:20.989 "ddgst": false 00:39:20.989 }, 00:39:20.989 "method": "bdev_nvme_attach_controller" 00:39:20.989 },{ 00:39:20.989 "params": { 00:39:20.989 "name": "Nvme2", 00:39:20.989 "trtype": "tcp", 00:39:20.989 "traddr": "10.0.0.2", 00:39:20.989 "adrfam": "ipv4", 00:39:20.989 "trsvcid": "4420", 00:39:20.989 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:39:20.989 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:39:20.989 "hdgst": false, 00:39:20.989 "ddgst": false 00:39:20.989 }, 00:39:20.989 "method": "bdev_nvme_attach_controller" 00:39:20.989 }' 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:20.989 07:49:04 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:20.989 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:20.990 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:20.990 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:20.990 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:20.990 07:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:20.990 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:20.990 ... 00:39:20.990 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:20.990 ... 00:39:20.990 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:20.990 ... 
00:39:20.990 fio-3.35 00:39:20.990 Starting 24 threads 00:39:33.218 00:39:33.218 filename0: (groupid=0, jobs=1): err= 0: pid=2444481: Tue Nov 26 07:49:16 2024 00:39:33.218 read: IOPS=516, BW=2068KiB/s (2118kB/s)(20.2MiB/10016msec) 00:39:33.218 slat (nsec): min=5497, max=76555, avg=11124.31, stdev=9187.08 00:39:33.218 clat (usec): min=2601, max=55557, avg=30858.81, stdev=5268.00 00:39:33.218 lat (usec): min=2620, max=55564, avg=30869.94, stdev=5267.94 00:39:33.218 clat percentiles (usec): 00:39:33.218 | 1.00th=[ 5932], 5.00th=[18744], 10.00th=[26870], 20.00th=[31851], 00:39:33.218 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:33.218 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:33.218 | 99.00th=[33817], 99.50th=[35914], 99.90th=[55313], 99.95th=[55313], 00:39:33.218 | 99.99th=[55313] 00:39:33.218 bw ( KiB/s): min= 1920, max= 2688, per=4.34%, avg=2064.80, stdev=198.86, samples=20 00:39:33.218 iops : min= 480, max= 672, avg=516.20, stdev=49.71, samples=20 00:39:33.218 lat (msec) : 4=0.93%, 10=0.62%, 20=5.14%, 50=93.01%, 100=0.31% 00:39:33.218 cpu : usr=98.77%, sys=0.90%, ctx=19, majf=0, minf=51 00:39:33.218 IO depths : 1=5.6%, 2=11.2%, 4=23.0%, 8=53.2%, 16=7.0%, 32=0.0%, >=64=0.0% 00:39:33.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.218 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.218 issued rwts: total=5178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.218 filename0: (groupid=0, jobs=1): err= 0: pid=2444482: Tue Nov 26 07:49:16 2024 00:39:33.218 read: IOPS=494, BW=1976KiB/s (2024kB/s)(19.3MiB/10018msec) 00:39:33.218 slat (usec): min=5, max=132, avg=25.41, stdev=21.11 00:39:33.218 clat (usec): min=16308, max=54219, avg=32142.90, stdev=2086.05 00:39:33.218 lat (usec): min=16319, max=54267, avg=32168.31, stdev=2087.22 00:39:33.218 clat percentiles (usec): 
00:39:33.218 | 1.00th=[22938], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:39:33.218 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:33.218 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:33.218 | 99.00th=[35914], 99.50th=[44827], 99.90th=[54264], 99.95th=[54264], 00:39:33.218 | 99.99th=[54264] 00:39:33.218 bw ( KiB/s): min= 1916, max= 2048, per=4.15%, avg=1973.40, stdev=63.40, samples=20 00:39:33.218 iops : min= 479, max= 512, avg=493.35, stdev=15.85, samples=20 00:39:33.218 lat (msec) : 20=0.63%, 50=99.25%, 100=0.12% 00:39:33.218 cpu : usr=99.13%, sys=0.55%, ctx=37, majf=0, minf=48 00:39:33.218 IO depths : 1=5.8%, 2=11.7%, 4=23.9%, 8=51.9%, 16=6.7%, 32=0.0%, >=64=0.0% 00:39:33.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 complete : 0=0.0%, 4=93.8%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 issued rwts: total=4950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.219 filename0: (groupid=0, jobs=1): err= 0: pid=2444483: Tue Nov 26 07:49:16 2024 00:39:33.219 read: IOPS=493, BW=1973KiB/s (2020kB/s)(19.3MiB/10004msec) 00:39:33.219 slat (usec): min=5, max=102, avg=22.54, stdev=13.88 00:39:33.219 clat (usec): min=5019, max=56240, avg=32251.08, stdev=3624.78 00:39:33.219 lat (usec): min=5025, max=56252, avg=32273.62, stdev=3625.54 00:39:33.219 clat percentiles (usec): 00:39:33.219 | 1.00th=[20841], 5.00th=[26870], 10.00th=[31589], 20.00th=[31851], 00:39:33.219 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:33.219 | 70.00th=[32637], 80.00th=[32637], 90.00th=[33424], 95.00th=[35914], 00:39:33.219 | 99.00th=[46924], 99.50th=[51119], 99.90th=[53216], 99.95th=[56361], 00:39:33.219 | 99.99th=[56361] 00:39:33.219 bw ( KiB/s): min= 1795, max= 2064, per=4.12%, avg=1959.74, stdev=68.60, samples=19 00:39:33.219 iops : min= 448, max= 516, avg=489.89, stdev=17.25, 
samples=19 00:39:33.219 lat (msec) : 10=0.24%, 20=0.63%, 50=98.52%, 100=0.61% 00:39:33.219 cpu : usr=98.80%, sys=0.87%, ctx=16, majf=0, minf=37 00:39:33.219 IO depths : 1=4.2%, 2=8.5%, 4=18.0%, 8=60.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:39:33.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 complete : 0=0.0%, 4=92.4%, 8=2.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 issued rwts: total=4934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.219 filename0: (groupid=0, jobs=1): err= 0: pid=2444484: Tue Nov 26 07:49:16 2024 00:39:33.219 read: IOPS=498, BW=1992KiB/s (2040kB/s)(19.5MiB/10002msec) 00:39:33.219 slat (usec): min=5, max=110, avg=14.03, stdev=12.77 00:39:33.219 clat (usec): min=11491, max=48240, avg=31999.87, stdev=2665.43 00:39:33.219 lat (usec): min=11499, max=48247, avg=32013.90, stdev=2665.12 00:39:33.219 clat percentiles (usec): 00:39:33.219 | 1.00th=[16581], 5.00th=[31065], 10.00th=[31589], 20.00th=[32113], 00:39:33.219 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:33.219 | 70.00th=[32637], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:39:33.219 | 99.00th=[34341], 99.50th=[39060], 99.90th=[42206], 99.95th=[47973], 00:39:33.219 | 99.99th=[48497] 00:39:33.219 bw ( KiB/s): min= 1920, max= 2176, per=4.20%, avg=1996.63, stdev=80.28, samples=19 00:39:33.219 iops : min= 480, max= 544, avg=499.16, stdev=20.07, samples=19 00:39:33.219 lat (msec) : 20=1.73%, 50=98.27% 00:39:33.219 cpu : usr=98.97%, sys=0.67%, ctx=76, majf=0, minf=69 00:39:33.219 IO depths : 1=5.8%, 2=11.7%, 4=23.8%, 8=52.0%, 16=6.7%, 32=0.0%, >=64=0.0% 00:39:33.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 issued rwts: total=4982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.219 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:39:33.219 filename0: (groupid=0, jobs=1): err= 0: pid=2444485: Tue Nov 26 07:49:16 2024 00:39:33.219 read: IOPS=496, BW=1986KiB/s (2034kB/s)(19.4MiB/10004msec) 00:39:33.219 slat (usec): min=5, max=107, avg=18.68, stdev=15.03 00:39:33.219 clat (usec): min=3237, max=80420, avg=32081.17, stdev=4179.06 00:39:33.219 lat (usec): min=3243, max=80435, avg=32099.85, stdev=4180.01 00:39:33.219 clat percentiles (usec): 00:39:33.219 | 1.00th=[17695], 5.00th=[24511], 10.00th=[29230], 20.00th=[31851], 00:39:33.219 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:33.219 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[36963], 00:39:33.219 | 99.00th=[46924], 99.50th=[49546], 99.90th=[61080], 99.95th=[80217], 00:39:33.219 | 99.99th=[80217] 00:39:33.219 bw ( KiB/s): min= 1771, max= 2144, per=4.16%, avg=1979.95, stdev=80.91, samples=19 00:39:33.219 iops : min= 442, max= 536, avg=494.95, stdev=20.33, samples=19 00:39:33.219 lat (msec) : 4=0.20%, 20=1.07%, 50=98.29%, 100=0.44% 00:39:33.219 cpu : usr=98.85%, sys=0.81%, ctx=14, majf=0, minf=47 00:39:33.219 IO depths : 1=2.4%, 2=4.8%, 4=11.6%, 8=68.6%, 16=12.7%, 32=0.0%, >=64=0.0% 00:39:33.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 complete : 0=0.0%, 4=91.1%, 8=5.6%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 issued rwts: total=4968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.219 filename0: (groupid=0, jobs=1): err= 0: pid=2444486: Tue Nov 26 07:49:16 2024 00:39:33.219 read: IOPS=493, BW=1973KiB/s (2021kB/s)(19.3MiB/10022msec) 00:39:33.219 slat (usec): min=5, max=113, avg=25.47, stdev=16.74 00:39:33.219 clat (usec): min=15038, max=38449, avg=32221.53, stdev=1441.95 00:39:33.219 lat (usec): min=15044, max=38481, avg=32246.99, stdev=1442.23 00:39:33.219 clat percentiles (usec): 00:39:33.219 | 1.00th=[30278], 5.00th=[31589], 10.00th=[31851], 
20.00th=[31851], 00:39:33.219 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:33.219 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:33.219 | 99.00th=[34341], 99.50th=[34866], 99.90th=[38536], 99.95th=[38536], 00:39:33.219 | 99.99th=[38536] 00:39:33.219 bw ( KiB/s): min= 1916, max= 2048, per=4.15%, avg=1971.15, stdev=64.39, samples=20 00:39:33.219 iops : min= 479, max= 512, avg=492.75, stdev=16.13, samples=20 00:39:33.219 lat (msec) : 20=0.65%, 50=99.35% 00:39:33.219 cpu : usr=99.05%, sys=0.62%, ctx=15, majf=0, minf=29 00:39:33.219 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:33.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.219 filename0: (groupid=0, jobs=1): err= 0: pid=2444487: Tue Nov 26 07:49:16 2024 00:39:33.219 read: IOPS=491, BW=1964KiB/s (2011kB/s)(19.2MiB/10004msec) 00:39:33.219 slat (nsec): min=5623, max=99399, avg=30102.44, stdev=15302.53 00:39:33.219 clat (usec): min=23986, max=40845, avg=32326.26, stdev=904.00 00:39:33.219 lat (usec): min=24007, max=40871, avg=32356.36, stdev=903.63 00:39:33.219 clat percentiles (usec): 00:39:33.219 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:39:33.219 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:33.219 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:33.219 | 99.00th=[34341], 99.50th=[36963], 99.90th=[40633], 99.95th=[40633], 00:39:33.219 | 99.99th=[40633] 00:39:33.219 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1960.42, stdev=61.13, samples=19 00:39:33.219 iops : min= 480, max= 512, avg=490.11, stdev=15.28, samples=19 00:39:33.219 lat (msec) : 50=100.00% 00:39:33.219 cpu : 
usr=98.80%, sys=0.86%, ctx=19, majf=0, minf=37 00:39:33.219 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:33.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.219 filename0: (groupid=0, jobs=1): err= 0: pid=2444488: Tue Nov 26 07:49:16 2024 00:39:33.219 read: IOPS=493, BW=1974KiB/s (2021kB/s)(19.3MiB/10011msec) 00:39:33.219 slat (usec): min=5, max=103, avg=25.11, stdev=15.23 00:39:33.219 clat (usec): min=16252, max=47857, avg=32206.02, stdev=1796.36 00:39:33.219 lat (usec): min=16270, max=47896, avg=32231.13, stdev=1797.23 00:39:33.219 clat percentiles (usec): 00:39:33.219 | 1.00th=[22938], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:39:33.219 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:33.219 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:33.219 | 99.00th=[36439], 99.50th=[39584], 99.90th=[47973], 99.95th=[47973], 00:39:33.219 | 99.99th=[47973] 00:39:33.219 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1965.47, stdev=61.57, samples=19 00:39:33.219 iops : min= 480, max= 512, avg=491.37, stdev=15.39, samples=19 00:39:33.219 lat (msec) : 20=0.45%, 50=99.55% 00:39:33.219 cpu : usr=98.91%, sys=0.74%, ctx=16, majf=0, minf=24 00:39:33.219 IO depths : 1=5.9%, 2=11.9%, 4=24.2%, 8=51.3%, 16=6.7%, 32=0.0%, >=64=0.0% 00:39:33.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 issued rwts: total=4940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.219 filename1: (groupid=0, jobs=1): err= 0: pid=2444489: Tue Nov 26 
07:49:16 2024 00:39:33.219 read: IOPS=494, BW=1980KiB/s (2027kB/s)(19.4MiB/10021msec) 00:39:33.219 slat (usec): min=5, max=107, avg=22.41, stdev=17.46 00:39:33.219 clat (usec): min=12821, max=41056, avg=32125.97, stdev=2099.15 00:39:33.219 lat (usec): min=12831, max=41115, avg=32148.38, stdev=2100.49 00:39:33.219 clat percentiles (usec): 00:39:33.219 | 1.00th=[20317], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:39:33.219 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:33.219 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:33.219 | 99.00th=[36439], 99.50th=[40109], 99.90th=[41157], 99.95th=[41157], 00:39:33.219 | 99.99th=[41157] 00:39:33.219 bw ( KiB/s): min= 1904, max= 2176, per=4.16%, avg=1976.80, stdev=78.12, samples=20 00:39:33.219 iops : min= 476, max= 544, avg=494.20, stdev=19.53, samples=20 00:39:33.219 lat (msec) : 20=0.87%, 50=99.13% 00:39:33.219 cpu : usr=98.74%, sys=0.91%, ctx=23, majf=0, minf=42 00:39:33.219 IO depths : 1=5.8%, 2=11.7%, 4=24.2%, 8=51.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:39:33.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.219 filename1: (groupid=0, jobs=1): err= 0: pid=2444490: Tue Nov 26 07:49:16 2024 00:39:33.219 read: IOPS=492, BW=1969KiB/s (2016kB/s)(19.2MiB/10002msec) 00:39:33.219 slat (usec): min=5, max=104, avg=19.67, stdev=15.49 00:39:33.219 clat (usec): min=13994, max=61234, avg=32316.89, stdev=2647.95 00:39:33.219 lat (usec): min=14002, max=61249, avg=32336.56, stdev=2647.99 00:39:33.219 clat percentiles (usec): 00:39:33.219 | 1.00th=[24249], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:39:33.219 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:33.219 | 
70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:33.220 | 99.00th=[34866], 99.50th=[54264], 99.90th=[61080], 99.95th=[61080], 00:39:33.220 | 99.99th=[61080] 00:39:33.220 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=1965.63, stdev=72.79, samples=19 00:39:33.220 iops : min= 448, max= 512, avg=491.37, stdev=18.29, samples=19 00:39:33.220 lat (msec) : 20=0.49%, 50=98.90%, 100=0.61% 00:39:33.220 cpu : usr=99.00%, sys=0.67%, ctx=13, majf=0, minf=42 00:39:33.220 IO depths : 1=5.9%, 2=12.0%, 4=24.8%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:39:33.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.220 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.220 issued rwts: total=4924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.220 filename1: (groupid=0, jobs=1): err= 0: pid=2444491: Tue Nov 26 07:49:16 2024 00:39:33.220 read: IOPS=491, BW=1965KiB/s (2012kB/s)(19.2MiB/10001msec) 00:39:33.220 slat (usec): min=5, max=109, avg=31.29, stdev=16.04 00:39:33.220 clat (usec): min=16110, max=51615, avg=32278.73, stdev=1600.85 00:39:33.220 lat (usec): min=16117, max=51631, avg=32310.02, stdev=1601.04 00:39:33.220 clat percentiles (usec): 00:39:33.220 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:39:33.220 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:33.220 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:33.220 | 99.00th=[34341], 99.50th=[36963], 99.90th=[51643], 99.95th=[51643], 00:39:33.220 | 99.99th=[51643] 00:39:33.220 bw ( KiB/s): min= 1795, max= 2048, per=4.12%, avg=1960.58, stdev=74.17, samples=19 00:39:33.220 iops : min= 448, max= 512, avg=490.11, stdev=18.64, samples=19 00:39:33.220 lat (msec) : 20=0.33%, 50=99.35%, 100=0.33% 00:39:33.220 cpu : usr=98.69%, sys=0.96%, ctx=14, majf=0, minf=35 00:39:33.220 IO depths : 1=6.2%, 2=12.5%, 
4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:33.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.220 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.220 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.220 filename1: (groupid=0, jobs=1): err= 0: pid=2444492: Tue Nov 26 07:49:16 2024 00:39:33.220 read: IOPS=491, BW=1965KiB/s (2012kB/s)(19.2MiB/10001msec) 00:39:33.220 slat (nsec): min=5540, max=97971, avg=25118.25, stdev=15798.70 00:39:33.220 clat (usec): min=24695, max=42849, avg=32368.72, stdev=1161.46 00:39:33.220 lat (usec): min=24707, max=42858, avg=32393.84, stdev=1160.60 00:39:33.220 clat percentiles (usec): 00:39:33.220 | 1.00th=[28443], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:39:33.220 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:33.220 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:39:33.220 | 99.00th=[36963], 99.50th=[39584], 99.90th=[41157], 99.95th=[41157], 00:39:33.220 | 99.99th=[42730] 00:39:33.220 bw ( KiB/s): min= 1916, max= 2048, per=4.12%, avg=1960.21, stdev=59.64, samples=19 00:39:33.220 iops : min= 479, max= 512, avg=490.05, stdev=14.91, samples=19 00:39:33.220 lat (msec) : 50=100.00% 00:39:33.220 cpu : usr=98.81%, sys=0.84%, ctx=15, majf=0, minf=34 00:39:33.220 IO depths : 1=5.8%, 2=11.7%, 4=24.1%, 8=51.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:39:33.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.220 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.220 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.220 filename1: (groupid=0, jobs=1): err= 0: pid=2444493: Tue Nov 26 07:49:16 2024 00:39:33.220 read: IOPS=506, BW=2028KiB/s (2077kB/s)(19.8MiB/10016msec) 
00:39:33.220 slat (nsec): min=5546, max=90188, avg=12989.71, stdev=10536.75 00:39:33.220 clat (usec): min=2690, max=35229, avg=31455.45, stdev=4659.24 00:39:33.220 lat (usec): min=2708, max=35237, avg=31468.44, stdev=4659.02 00:39:33.220 clat percentiles (usec): 00:39:33.220 | 1.00th=[ 5800], 5.00th=[28705], 10.00th=[31851], 20.00th=[32113], 00:39:33.220 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:39:33.220 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:33.220 | 99.00th=[34341], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:39:33.220 | 99.99th=[35390] 00:39:33.220 bw ( KiB/s): min= 1920, max= 2864, per=4.26%, avg=2024.80, stdev=207.61, samples=20 00:39:33.220 iops : min= 480, max= 716, avg=506.20, stdev=51.90, samples=20 00:39:33.220 lat (msec) : 4=0.95%, 10=1.20%, 20=1.67%, 50=96.18% 00:39:33.220 cpu : usr=98.75%, sys=0.91%, ctx=12, majf=0, minf=80 00:39:33.220 IO depths : 1=6.0%, 2=12.1%, 4=24.4%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:39:33.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.220 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.220 issued rwts: total=5078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.220 filename1: (groupid=0, jobs=1): err= 0: pid=2444494: Tue Nov 26 07:49:16 2024 00:39:33.220 read: IOPS=492, BW=1969KiB/s (2016kB/s)(19.2MiB/10011msec) 00:39:33.220 slat (usec): min=5, max=119, avg=25.83, stdev=20.45 00:39:33.220 clat (usec): min=14833, max=53461, avg=32250.18, stdev=1405.31 00:39:33.220 lat (usec): min=14844, max=53482, avg=32276.01, stdev=1406.00 00:39:33.220 clat percentiles (usec): 00:39:33.220 | 1.00th=[30278], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:39:33.220 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:33.220 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 
00:39:33.220 | 99.00th=[34341], 99.50th=[36439], 99.90th=[49021], 99.95th=[53216], 00:39:33.220 | 99.99th=[53216] 00:39:33.220 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1967.16, stdev=63.44, samples=19 00:39:33.220 iops : min= 480, max= 512, avg=491.79, stdev=15.86, samples=19 00:39:33.220 lat (msec) : 20=0.14%, 50=99.80%, 100=0.06% 00:39:33.220 cpu : usr=99.05%, sys=0.61%, ctx=15, majf=0, minf=46 00:39:33.220 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:39:33.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.220 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.220 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.220 filename1: (groupid=0, jobs=1): err= 0: pid=2444495: Tue Nov 26 07:49:16 2024 00:39:33.220 read: IOPS=497, BW=1990KiB/s (2038kB/s)(19.4MiB/10002msec) 00:39:33.220 slat (usec): min=5, max=117, avg=22.40, stdev=18.72 00:39:33.220 clat (usec): min=8485, max=52155, avg=31963.49, stdev=2662.69 00:39:33.220 lat (usec): min=8497, max=52212, avg=31985.89, stdev=2663.23 00:39:33.220 clat percentiles (usec): 00:39:33.220 | 1.00th=[14746], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:39:33.220 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:33.220 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:33.220 | 99.00th=[34341], 99.50th=[34866], 99.90th=[39584], 99.95th=[40109], 00:39:33.220 | 99.99th=[52167] 00:39:33.220 bw ( KiB/s): min= 1904, max= 2304, per=4.18%, avg=1987.37, stdev=99.00, samples=19 00:39:33.220 iops : min= 476, max= 576, avg=496.84, stdev=24.75, samples=19 00:39:33.220 lat (msec) : 10=0.22%, 20=1.65%, 50=98.09%, 100=0.04% 00:39:33.220 cpu : usr=98.96%, sys=0.73%, ctx=17, majf=0, minf=56 00:39:33.220 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:39:33.220 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.220 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.220 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.220 filename1: (groupid=0, jobs=1): err= 0: pid=2444496: Tue Nov 26 07:49:16 2024 00:39:33.220 read: IOPS=495, BW=1982KiB/s (2029kB/s)(19.4MiB/10004msec) 00:39:33.220 slat (usec): min=5, max=105, avg=23.29, stdev=15.50 00:39:33.220 clat (usec): min=7825, max=54151, avg=32106.59, stdev=3445.73 00:39:33.220 lat (usec): min=7833, max=54172, avg=32129.88, stdev=3446.48 00:39:33.220 clat percentiles (usec): 00:39:33.220 | 1.00th=[18482], 5.00th=[27132], 10.00th=[31065], 20.00th=[31851], 00:39:33.220 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:33.220 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[35914], 00:39:33.220 | 99.00th=[42206], 99.50th=[48497], 99.90th=[54264], 99.95th=[54264], 00:39:33.220 | 99.99th=[54264] 00:39:33.220 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1973.05, stdev=67.26, samples=19 00:39:33.220 iops : min= 448, max= 512, avg=493.26, stdev=16.82, samples=19 00:39:33.220 lat (msec) : 10=0.12%, 20=1.13%, 50=98.26%, 100=0.48% 00:39:33.220 cpu : usr=98.92%, sys=0.74%, ctx=13, majf=0, minf=30 00:39:33.220 IO depths : 1=3.8%, 2=7.8%, 4=16.9%, 8=61.4%, 16=10.1%, 32=0.0%, >=64=0.0% 00:39:33.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.220 complete : 0=0.0%, 4=92.2%, 8=3.5%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.220 issued rwts: total=4956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.220 filename2: (groupid=0, jobs=1): err= 0: pid=2444497: Tue Nov 26 07:49:16 2024 00:39:33.220 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.4MiB/10010msec) 00:39:33.220 slat (usec): min=5, max=114, 
avg=21.07, stdev=14.40 00:39:33.220 clat (usec): min=11752, max=58622, avg=32011.41, stdev=3952.57 00:39:33.220 lat (usec): min=11759, max=58637, avg=32032.48, stdev=3954.06 00:39:33.220 clat percentiles (usec): 00:39:33.220 | 1.00th=[19268], 5.00th=[24511], 10.00th=[28705], 20.00th=[31851], 00:39:33.220 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:33.220 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33817], 95.00th=[36439], 00:39:33.220 | 99.00th=[46924], 99.50th=[51119], 99.90th=[58459], 99.95th=[58459], 00:39:33.220 | 99.99th=[58459] 00:39:33.220 bw ( KiB/s): min= 1840, max= 2160, per=4.17%, avg=1981.47, stdev=86.70, samples=19 00:39:33.220 iops : min= 460, max= 540, avg=495.37, stdev=21.67, samples=19 00:39:33.220 lat (msec) : 20=1.55%, 50=97.89%, 100=0.56% 00:39:33.220 cpu : usr=98.85%, sys=0.81%, ctx=15, majf=0, minf=43 00:39:33.220 IO depths : 1=3.8%, 2=7.9%, 4=17.5%, 8=61.2%, 16=9.6%, 32=0.0%, >=64=0.0% 00:39:33.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.220 complete : 0=0.0%, 4=92.2%, 8=3.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.220 issued rwts: total=4978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.220 filename2: (groupid=0, jobs=1): err= 0: pid=2444498: Tue Nov 26 07:49:16 2024 00:39:33.220 read: IOPS=493, BW=1974KiB/s (2021kB/s)(19.3MiB/10014msec) 00:39:33.221 slat (nsec): min=5488, max=97392, avg=20021.83, stdev=14159.16 00:39:33.221 clat (usec): min=15961, max=59914, avg=32264.90, stdev=3183.35 00:39:33.221 lat (usec): min=15966, max=59920, avg=32284.93, stdev=3183.76 00:39:33.221 clat percentiles (usec): 00:39:33.221 | 1.00th=[20579], 5.00th=[28443], 10.00th=[31589], 20.00th=[32113], 00:39:33.221 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:33.221 | 70.00th=[32637], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:39:33.221 | 99.00th=[44303], 99.50th=[47973], 
99.90th=[60031], 99.95th=[60031], 00:39:33.221 | 99.99th=[60031] 00:39:33.221 bw ( KiB/s): min= 1792, max= 2096, per=4.15%, avg=1973.05, stdev=76.74, samples=19 00:39:33.221 iops : min= 448, max= 524, avg=493.26, stdev=19.19, samples=19 00:39:33.221 lat (msec) : 20=0.85%, 50=98.70%, 100=0.45% 00:39:33.221 cpu : usr=98.75%, sys=0.90%, ctx=16, majf=0, minf=35 00:39:33.221 IO depths : 1=4.3%, 2=8.8%, 4=18.9%, 8=58.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:39:33.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.221 complete : 0=0.0%, 4=92.7%, 8=2.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.221 issued rwts: total=4942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.221 filename2: (groupid=0, jobs=1): err= 0: pid=2444499: Tue Nov 26 07:49:16 2024 00:39:33.221 read: IOPS=492, BW=1970KiB/s (2018kB/s)(19.3MiB/10024msec) 00:39:33.221 slat (usec): min=5, max=100, avg=19.87, stdev=14.24 00:39:33.221 clat (usec): min=11439, max=41329, avg=32289.33, stdev=1552.36 00:39:33.221 lat (usec): min=11447, max=41337, avg=32309.20, stdev=1552.45 00:39:33.221 clat percentiles (usec): 00:39:33.221 | 1.00th=[29230], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:39:33.221 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:33.221 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:33.221 | 99.00th=[34341], 99.50th=[34866], 99.90th=[41157], 99.95th=[41157], 00:39:33.221 | 99.99th=[41157] 00:39:33.221 bw ( KiB/s): min= 1904, max= 2048, per=4.14%, avg=1970.55, stdev=64.98, samples=20 00:39:33.221 iops : min= 476, max= 512, avg=492.60, stdev=16.28, samples=20 00:39:33.221 lat (msec) : 20=0.32%, 50=99.68% 00:39:33.221 cpu : usr=98.91%, sys=0.75%, ctx=13, majf=0, minf=42 00:39:33.221 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:33.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:39:33.221 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.221 issued rwts: total=4938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.221 filename2: (groupid=0, jobs=1): err= 0: pid=2444500: Tue Nov 26 07:49:16 2024 00:39:33.221 read: IOPS=492, BW=1969KiB/s (2016kB/s)(19.2MiB/10013msec) 00:39:33.221 slat (nsec): min=5511, max=95346, avg=18970.56, stdev=14252.62 00:39:33.221 clat (usec): min=17786, max=47169, avg=32348.74, stdev=1486.16 00:39:33.221 lat (usec): min=17793, max=47192, avg=32367.71, stdev=1485.62 00:39:33.221 clat percentiles (usec): 00:39:33.221 | 1.00th=[26870], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:39:33.221 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:33.221 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:33.221 | 99.00th=[39060], 99.50th=[42206], 99.90th=[43779], 99.95th=[45351], 00:39:33.221 | 99.99th=[46924] 00:39:33.221 bw ( KiB/s): min= 1920, max= 2052, per=4.14%, avg=1966.30, stdev=62.82, samples=20 00:39:33.221 iops : min= 480, max= 513, avg=491.35, stdev=15.87, samples=20 00:39:33.221 lat (msec) : 20=0.04%, 50=99.96% 00:39:33.221 cpu : usr=98.76%, sys=0.89%, ctx=13, majf=0, minf=27 00:39:33.221 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:39:33.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.221 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.221 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.221 filename2: (groupid=0, jobs=1): err= 0: pid=2444501: Tue Nov 26 07:49:16 2024 00:39:33.221 read: IOPS=499, BW=1996KiB/s (2044kB/s)(19.5MiB/10002msec) 00:39:33.221 slat (usec): min=5, max=116, avg=19.80, stdev=18.17 00:39:33.221 clat (usec): min=8599, max=39867, avg=31888.26, 
stdev=2837.97 00:39:33.221 lat (usec): min=8611, max=39876, avg=31908.06, stdev=2838.25 00:39:33.221 clat percentiles (usec): 00:39:33.221 | 1.00th=[14746], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:39:33.221 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:33.221 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:33.221 | 99.00th=[34341], 99.50th=[34341], 99.90th=[39584], 99.95th=[40109], 00:39:33.221 | 99.99th=[40109] 00:39:33.221 bw ( KiB/s): min= 1920, max= 2304, per=4.19%, avg=1994.11, stdev=98.37, samples=19 00:39:33.221 iops : min= 480, max= 576, avg=498.53, stdev=24.59, samples=19 00:39:33.221 lat (msec) : 10=0.32%, 20=1.70%, 50=97.98% 00:39:33.221 cpu : usr=98.73%, sys=0.92%, ctx=36, majf=0, minf=44 00:39:33.221 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:39:33.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.221 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.221 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.221 filename2: (groupid=0, jobs=1): err= 0: pid=2444502: Tue Nov 26 07:49:16 2024 00:39:33.221 read: IOPS=497, BW=1988KiB/s (2036kB/s)(19.4MiB/10004msec) 00:39:33.221 slat (usec): min=5, max=121, avg=24.78, stdev=20.37 00:39:33.221 clat (usec): min=7817, max=56434, avg=31981.72, stdev=4332.72 00:39:33.221 lat (usec): min=7826, max=56440, avg=32006.50, stdev=4333.72 00:39:33.221 clat percentiles (usec): 00:39:33.221 | 1.00th=[16188], 5.00th=[23462], 10.00th=[28705], 20.00th=[31851], 00:39:33.221 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:33.221 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33817], 95.00th=[36963], 00:39:33.221 | 99.00th=[50594], 99.50th=[51119], 99.90th=[54264], 99.95th=[56361], 00:39:33.221 | 99.99th=[56361] 00:39:33.221 bw ( KiB/s): min= 1795, 
max= 2048, per=4.15%, avg=1973.21, stdev=71.34, samples=19 00:39:33.221 iops : min= 448, max= 512, avg=493.26, stdev=17.94, samples=19 00:39:33.221 lat (msec) : 10=0.12%, 20=1.87%, 50=96.78%, 100=1.23% 00:39:33.221 cpu : usr=98.85%, sys=0.79%, ctx=35, majf=0, minf=35 00:39:33.221 IO depths : 1=3.2%, 2=6.9%, 4=16.2%, 8=63.2%, 16=10.4%, 32=0.0%, >=64=0.0% 00:39:33.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.221 complete : 0=0.0%, 4=92.0%, 8=3.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.221 issued rwts: total=4972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.221 filename2: (groupid=0, jobs=1): err= 0: pid=2444503: Tue Nov 26 07:49:16 2024 00:39:33.221 read: IOPS=496, BW=1986KiB/s (2034kB/s)(19.4MiB/10001msec) 00:39:33.221 slat (usec): min=5, max=134, avg=22.09, stdev=19.01 00:39:33.221 clat (usec): min=12658, max=44774, avg=32023.03, stdev=2226.49 00:39:33.221 lat (usec): min=12669, max=44796, avg=32045.12, stdev=2227.83 00:39:33.221 clat percentiles (usec): 00:39:33.221 | 1.00th=[18744], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:39:33.221 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:33.221 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:33.221 | 99.00th=[33817], 99.50th=[34341], 99.90th=[44827], 99.95th=[44827], 00:39:33.221 | 99.99th=[44827] 00:39:33.221 bw ( KiB/s): min= 1920, max= 2224, per=4.18%, avg=1989.89, stdev=95.29, samples=19 00:39:33.221 iops : min= 480, max= 556, avg=497.47, stdev=23.82, samples=19 00:39:33.221 lat (msec) : 20=1.11%, 50=98.89% 00:39:33.221 cpu : usr=98.83%, sys=0.84%, ctx=16, majf=0, minf=36 00:39:33.221 IO depths : 1=5.9%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:39:33.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.221 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:39:33.221 issued rwts: total=4966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.221 filename2: (groupid=0, jobs=1): err= 0: pid=2444504: Tue Nov 26 07:49:16 2024 00:39:33.221 read: IOPS=494, BW=1979KiB/s (2026kB/s)(19.3MiB/10002msec) 00:39:33.221 slat (usec): min=5, max=121, avg=24.79, stdev=17.41 00:39:33.221 clat (usec): min=13773, max=54358, avg=32116.48, stdev=3069.75 00:39:33.221 lat (usec): min=13810, max=54363, avg=32141.27, stdev=3070.68 00:39:33.221 clat percentiles (usec): 00:39:33.221 | 1.00th=[22414], 5.00th=[27132], 10.00th=[31589], 20.00th=[31851], 00:39:33.221 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:33.221 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:39:33.221 | 99.00th=[42206], 99.50th=[50070], 99.90th=[51643], 99.95th=[54264], 00:39:33.221 | 99.99th=[54264] 00:39:33.221 bw ( KiB/s): min= 1792, max= 2144, per=4.15%, avg=1975.58, stdev=84.73, samples=19 00:39:33.221 iops : min= 448, max= 536, avg=493.89, stdev=21.18, samples=19 00:39:33.221 lat (msec) : 20=0.65%, 50=98.91%, 100=0.44% 00:39:33.221 cpu : usr=98.84%, sys=0.83%, ctx=15, majf=0, minf=37 00:39:33.221 IO depths : 1=4.9%, 2=9.9%, 4=20.7%, 8=56.4%, 16=8.2%, 32=0.0%, >=64=0.0% 00:39:33.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.221 complete : 0=0.0%, 4=93.1%, 8=1.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.221 issued rwts: total=4948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:33.221 00:39:33.221 Run status group 0 (all jobs): 00:39:33.221 READ: bw=46.4MiB/s (48.7MB/s), 1964KiB/s-2068KiB/s (2011kB/s-2118kB/s), io=465MiB (488MB), run=10001-10024msec 00:39:33.221 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:39:33.221 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 
00:39:33.221 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:33.221 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:33.221 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:33.221 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:33.221 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.221 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.221 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.221 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:33.221 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:33.222 07:49:16 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@117 -- # create_subsystems 0 1 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.222 bdev_null0 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:33.222 07:49:16 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.222 [2024-11-26 07:49:16.307021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.222 bdev_null1 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.222 
07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:33.222 { 00:39:33.222 "params": { 00:39:33.222 "name": "Nvme$subsystem", 00:39:33.222 "trtype": "$TEST_TRANSPORT", 00:39:33.222 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:39:33.222 "adrfam": "ipv4", 00:39:33.222 "trsvcid": "$NVMF_PORT", 00:39:33.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:33.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:33.222 "hdgst": ${hdgst:-false}, 00:39:33.222 "ddgst": ${ddgst:-false} 00:39:33.222 }, 00:39:33.222 "method": "bdev_nvme_attach_controller" 00:39:33.222 } 00:39:33.222 EOF 00:39:33.222 )") 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:33.222 { 00:39:33.222 "params": { 00:39:33.222 "name": "Nvme$subsystem", 00:39:33.222 "trtype": "$TEST_TRANSPORT", 00:39:33.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:33.222 "adrfam": "ipv4", 00:39:33.222 "trsvcid": "$NVMF_PORT", 00:39:33.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:33.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:33.222 "hdgst": ${hdgst:-false}, 00:39:33.222 "ddgst": ${ddgst:-false} 00:39:33.222 }, 00:39:33.222 "method": "bdev_nvme_attach_controller" 00:39:33.222 } 00:39:33.222 EOF 00:39:33.222 )") 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:39:33.222 07:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:33.222 "params": { 00:39:33.223 "name": "Nvme0", 00:39:33.223 "trtype": "tcp", 00:39:33.223 "traddr": "10.0.0.2", 00:39:33.223 "adrfam": "ipv4", 00:39:33.223 "trsvcid": "4420", 00:39:33.223 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:33.223 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:33.223 "hdgst": false, 00:39:33.223 "ddgst": false 00:39:33.223 }, 00:39:33.223 "method": "bdev_nvme_attach_controller" 00:39:33.223 },{ 00:39:33.223 "params": { 00:39:33.223 "name": "Nvme1", 00:39:33.223 "trtype": "tcp", 00:39:33.223 "traddr": "10.0.0.2", 00:39:33.223 "adrfam": "ipv4", 00:39:33.223 "trsvcid": "4420", 00:39:33.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:33.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:33.223 "hdgst": false, 00:39:33.223 "ddgst": false 00:39:33.223 }, 00:39:33.223 "method": "bdev_nvme_attach_controller" 00:39:33.223 }' 00:39:33.223 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:33.223 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:33.223 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:33.223 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:33.223 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:33.223 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:33.223 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:33.223 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:33.223 07:49:16 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:33.223 07:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:33.223 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:33.223 ... 00:39:33.223 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:33.223 ... 00:39:33.223 fio-3.35 00:39:33.223 Starting 4 threads 00:39:38.511 00:39:38.511 filename0: (groupid=0, jobs=1): err= 0: pid=2446793: Tue Nov 26 07:49:22 2024 00:39:38.511 read: IOPS=2207, BW=17.2MiB/s (18.1MB/s)(86.3MiB/5002msec) 00:39:38.511 slat (nsec): min=5473, max=55189, avg=6227.75, stdev=2293.47 00:39:38.511 clat (usec): min=1048, max=6341, avg=3606.64, stdev=721.08 00:39:38.511 lat (usec): min=1072, max=6347, avg=3612.87, stdev=720.78 00:39:38.511 clat percentiles (usec): 00:39:38.511 | 1.00th=[ 2376], 5.00th=[ 2671], 10.00th=[ 2868], 20.00th=[ 3130], 00:39:38.511 | 30.00th=[ 3294], 40.00th=[ 3392], 50.00th=[ 3458], 60.00th=[ 3556], 00:39:38.511 | 70.00th=[ 3654], 80.00th=[ 3851], 90.00th=[ 4883], 95.00th=[ 5211], 00:39:38.511 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 5997], 99.95th=[ 6259], 00:39:38.511 | 99.99th=[ 6325] 00:39:38.511 bw ( KiB/s): min=16752, max=18645, per=26.21%, avg=17666.50, stdev=654.05, samples=10 00:39:38.511 iops : min= 2094, max= 2330, avg=2208.20, stdev=81.62, samples=10 00:39:38.511 lat (msec) : 2=0.52%, 4=81.54%, 10=17.94% 00:39:38.511 cpu : usr=96.36%, sys=3.38%, ctx=6, majf=0, minf=29 00:39:38.511 IO depths : 1=0.1%, 2=1.3%, 4=71.1%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:38.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.511 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.511 issued rwts: total=11042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:38.511 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:38.511 filename0: (groupid=0, jobs=1): err= 0: pid=2446795: Tue Nov 26 07:49:22 2024 00:39:38.511 read: IOPS=2077, BW=16.2MiB/s (17.0MB/s)(81.2MiB/5001msec) 00:39:38.511 slat (nsec): min=5478, max=62453, avg=6059.35, stdev=1887.41 00:39:38.511 clat (usec): min=1583, max=8741, avg=3834.78, stdev=723.93 00:39:38.511 lat (usec): min=1589, max=8778, avg=3840.84, stdev=723.90 00:39:38.511 clat percentiles (usec): 00:39:38.511 | 1.00th=[ 2769], 5.00th=[ 3097], 10.00th=[ 3228], 20.00th=[ 3359], 00:39:38.511 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3621], 60.00th=[ 3687], 00:39:38.511 | 70.00th=[ 3785], 80.00th=[ 4113], 90.00th=[ 5211], 95.00th=[ 5407], 00:39:38.511 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6390], 99.95th=[ 8717], 00:39:38.511 | 99.99th=[ 8717] 00:39:38.511 bw ( KiB/s): min=16240, max=17360, per=24.61%, avg=16588.44, stdev=339.11, samples=9 00:39:38.511 iops : min= 2030, max= 2170, avg=2073.56, stdev=42.39, samples=9 00:39:38.511 lat (msec) : 2=0.03%, 4=76.21%, 10=23.77% 00:39:38.511 cpu : usr=96.96%, sys=2.80%, ctx=6, majf=0, minf=60 00:39:38.511 IO depths : 1=0.1%, 2=0.2%, 4=72.3%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:38.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.511 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.511 issued rwts: total=10389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:38.511 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:38.511 filename1: (groupid=0, jobs=1): err= 0: pid=2446796: Tue Nov 26 07:49:22 2024 00:39:38.511 read: IOPS=2074, BW=16.2MiB/s (17.0MB/s)(81.1MiB/5002msec) 00:39:38.511 slat (nsec): min=5471, max=95699, avg=6117.05, stdev=2161.81 00:39:38.511 clat (usec): min=1459, max=8539, avg=3838.09, stdev=704.03 00:39:38.511 lat (usec): min=1465, 
max=8567, avg=3844.21, stdev=704.00 00:39:38.511 clat percentiles (usec): 00:39:38.511 | 1.00th=[ 2769], 5.00th=[ 3163], 10.00th=[ 3228], 20.00th=[ 3425], 00:39:38.511 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3687], 00:39:38.511 | 70.00th=[ 3818], 80.00th=[ 4080], 90.00th=[ 5211], 95.00th=[ 5407], 00:39:38.511 | 99.00th=[ 5800], 99.50th=[ 5932], 99.90th=[ 6587], 99.95th=[ 6718], 00:39:38.511 | 99.99th=[ 8455] 00:39:38.511 bw ( KiB/s): min=16240, max=16976, per=24.66%, avg=16624.00, stdev=233.65, samples=9 00:39:38.511 iops : min= 2030, max= 2122, avg=2078.00, stdev=29.21, samples=9 00:39:38.511 lat (msec) : 2=0.01%, 4=77.48%, 10=22.51% 00:39:38.511 cpu : usr=96.68%, sys=3.06%, ctx=6, majf=0, minf=39 00:39:38.511 IO depths : 1=0.1%, 2=0.2%, 4=72.5%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:38.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.511 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.511 issued rwts: total=10379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:38.511 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:38.511 filename1: (groupid=0, jobs=1): err= 0: pid=2446797: Tue Nov 26 07:49:22 2024 00:39:38.511 read: IOPS=2067, BW=16.2MiB/s (16.9MB/s)(80.8MiB/5001msec) 00:39:38.511 slat (nsec): min=5470, max=45966, avg=7259.22, stdev=2185.08 00:39:38.511 clat (usec): min=702, max=6601, avg=3850.36, stdev=682.93 00:39:38.511 lat (usec): min=716, max=6607, avg=3857.62, stdev=682.81 00:39:38.511 clat percentiles (usec): 00:39:38.511 | 1.00th=[ 2933], 5.00th=[ 3195], 10.00th=[ 3294], 20.00th=[ 3425], 00:39:38.511 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3720], 00:39:38.511 | 70.00th=[ 3785], 80.00th=[ 4113], 90.00th=[ 5211], 95.00th=[ 5407], 00:39:38.511 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6325], 99.95th=[ 6390], 00:39:38.511 | 99.99th=[ 6587] 00:39:38.511 bw ( KiB/s): min=16224, max=16944, per=24.48%, avg=16503.11, stdev=269.89, 
samples=9 00:39:38.511 iops : min= 2028, max= 2118, avg=2062.89, stdev=33.74, samples=9 00:39:38.511 lat (usec) : 750=0.01% 00:39:38.511 lat (msec) : 2=0.03%, 4=76.40%, 10=23.56% 00:39:38.511 cpu : usr=96.40%, sys=3.36%, ctx=6, majf=0, minf=52 00:39:38.511 IO depths : 1=0.1%, 2=0.1%, 4=71.8%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:38.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.511 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.511 issued rwts: total=10339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:38.511 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:38.511 00:39:38.511 Run status group 0 (all jobs): 00:39:38.512 READ: bw=65.8MiB/s (69.0MB/s), 16.2MiB/s-17.2MiB/s (16.9MB/s-18.1MB/s), io=329MiB (345MB), run=5001-5002msec 00:39:38.773 07:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:39:38.773 07:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:38.773 07:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:38.773 07:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:38.773 07:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:38.773 07:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:38.773 07:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.773 07:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:38.773 07:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.773 07:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:38.773 07:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.773 07:49:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:39:38.773 07:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.773 07:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:38.773 07:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:38.773 07:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:38.773 07:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:38.773 07:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.773 07:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:38.774 07:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.774 07:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:38.774 07:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.774 07:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:38.774 07:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.774 00:39:38.774 real 0m24.754s 00:39:38.774 user 5m18.813s 00:39:38.774 sys 0m4.544s 00:39:38.774 07:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:38.774 07:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:38.774 ************************************ 00:39:38.774 END TEST fio_dif_rand_params 00:39:38.774 ************************************ 00:39:38.774 07:49:22 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:39:38.774 07:49:22 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:38.774 07:49:22 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:38.774 07:49:22 
nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:38.774 ************************************ 00:39:38.774 START TEST fio_dif_digest 00:39:38.774 ************************************ 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:38.774 bdev_null0 00:39:38.774 07:49:22 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.774 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:39.036 [2024-11-26 07:49:22.910926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 
00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:39.036 { 00:39:39.036 "params": { 00:39:39.036 "name": "Nvme$subsystem", 00:39:39.036 "trtype": "$TEST_TRANSPORT", 00:39:39.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:39.036 "adrfam": "ipv4", 00:39:39.036 "trsvcid": "$NVMF_PORT", 00:39:39.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:39.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:39.036 "hdgst": ${hdgst:-false}, 00:39:39.036 "ddgst": ${ddgst:-false} 00:39:39.036 }, 00:39:39.036 "method": "bdev_nvme_attach_controller" 00:39:39.036 } 00:39:39.036 EOF 00:39:39.036 )") 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:39:39.036 07:49:22 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:39.036 "params": { 00:39:39.036 "name": "Nvme0", 00:39:39.036 "trtype": "tcp", 00:39:39.036 "traddr": "10.0.0.2", 00:39:39.036 "adrfam": "ipv4", 00:39:39.036 "trsvcid": "4420", 00:39:39.036 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:39.036 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:39.036 "hdgst": true, 00:39:39.036 "ddgst": true 00:39:39.036 }, 00:39:39.036 "method": "bdev_nvme_attach_controller" 00:39:39.036 }' 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:39.036 07:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:39.036 07:49:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:39.297 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:39.297 ... 00:39:39.297 fio-3.35 00:39:39.297 Starting 3 threads 00:39:51.533 00:39:51.533 filename0: (groupid=0, jobs=1): err= 0: pid=2448202: Tue Nov 26 07:49:34 2024 00:39:51.533 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(287MiB/10049msec) 00:39:51.533 slat (nsec): min=5873, max=32265, avg=6623.32, stdev=1176.41 00:39:51.533 clat (usec): min=7246, max=51048, avg=13074.25, stdev=1448.86 00:39:51.533 lat (usec): min=7252, max=51054, avg=13080.87, stdev=1448.82 00:39:51.533 clat percentiles (usec): 00:39:51.533 | 1.00th=[ 8979], 5.00th=[10552], 10.00th=[11863], 20.00th=[12387], 00:39:51.533 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:39:51.533 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14353], 95.00th=[14746], 00:39:51.533 | 99.00th=[15533], 99.50th=[15795], 99.90th=[16581], 99.95th=[16712], 00:39:51.533 | 99.99th=[51119] 00:39:51.533 bw ( KiB/s): min=28416, max=30976, per=34.72%, avg=29376.00, stdev=760.67, samples=20 00:39:51.533 iops : min= 222, max= 242, avg=229.50, stdev= 5.94, samples=20 00:39:51.533 lat (msec) : 10=3.74%, 20=96.21%, 100=0.04% 00:39:51.533 cpu : usr=94.56%, sys=5.21%, ctx=21, majf=0, minf=72 
00:39:51.533 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:51.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.533 issued rwts: total=2298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.533 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:51.533 filename0: (groupid=0, jobs=1): err= 0: pid=2448203: Tue Nov 26 07:49:34 2024 00:39:51.533 read: IOPS=206, BW=25.8MiB/s (27.1MB/s)(259MiB/10044msec) 00:39:51.533 slat (nsec): min=5885, max=60255, avg=7670.93, stdev=1941.27 00:39:51.533 clat (usec): min=10728, max=97343, avg=14505.10, stdev=5455.83 00:39:51.533 lat (usec): min=10735, max=97350, avg=14512.77, stdev=5455.81 00:39:51.533 clat percentiles (usec): 00:39:51.533 | 1.00th=[11600], 5.00th=[12387], 10.00th=[12780], 20.00th=[13173], 00:39:51.533 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:39:51.533 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15270], 95.00th=[15664], 00:39:51.533 | 99.00th=[54789], 99.50th=[55313], 99.90th=[58459], 99.95th=[96994], 00:39:51.533 | 99.99th=[96994] 00:39:51.533 bw ( KiB/s): min=23040, max=28160, per=31.33%, avg=26511.50, stdev=1665.32, samples=20 00:39:51.533 iops : min= 180, max= 220, avg=207.10, stdev=13.00, samples=20 00:39:51.533 lat (msec) : 20=98.41%, 50=0.19%, 100=1.40% 00:39:51.533 cpu : usr=94.86%, sys=4.64%, ctx=472, majf=0, minf=181 00:39:51.533 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:51.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.533 issued rwts: total=2073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.533 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:51.533 filename0: (groupid=0, jobs=1): err= 0: pid=2448204: Tue Nov 26 07:49:34 2024 00:39:51.533 
read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(284MiB/10007msec) 00:39:51.533 slat (nsec): min=6174, max=32167, avg=8999.12, stdev=1438.62 00:39:51.533 clat (usec): min=6725, max=16441, avg=13200.99, stdev=1255.26 00:39:51.533 lat (usec): min=6731, max=16450, avg=13209.99, stdev=1255.32 00:39:51.533 clat percentiles (usec): 00:39:51.533 | 1.00th=[ 8979], 5.00th=[10552], 10.00th=[11863], 20.00th=[12518], 00:39:51.533 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:39:51.533 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14484], 95.00th=[14877], 00:39:51.533 | 99.00th=[15664], 99.50th=[16057], 99.90th=[16450], 99.95th=[16450], 00:39:51.533 | 99.99th=[16450] 00:39:51.533 bw ( KiB/s): min=28160, max=30464, per=34.34%, avg=29056.00, stdev=697.39, samples=20 00:39:51.533 iops : min= 220, max= 238, avg=227.00, stdev= 5.45, samples=20 00:39:51.533 lat (msec) : 10=3.79%, 20=96.21% 00:39:51.533 cpu : usr=94.00%, sys=5.16%, ctx=310, majf=0, minf=140 00:39:51.533 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:51.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.533 issued rwts: total=2272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.533 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:51.533 00:39:51.533 Run status group 0 (all jobs): 00:39:51.533 READ: bw=82.6MiB/s (86.6MB/s), 25.8MiB/s-28.6MiB/s (27.1MB/s-30.0MB/s), io=830MiB (871MB), run=10007-10049msec 00:39:51.533 07:49:34 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:39:51.533 07:49:34 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:39:51.533 07:49:34 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:39:51.533 07:49:34 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:51.533 07:49:34 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 
00:39:51.533 07:49:34 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:51.533 07:49:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.533 07:49:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:51.533 07:49:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.533 07:49:34 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:51.533 07:49:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.533 07:49:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:51.533 07:49:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.533 00:39:51.533 real 0m11.314s 00:39:51.533 user 0m43.536s 00:39:51.533 sys 0m1.811s 00:39:51.533 07:49:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:51.533 07:49:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:51.533 ************************************ 00:39:51.533 END TEST fio_dif_digest 00:39:51.533 ************************************ 00:39:51.533 07:49:34 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:39:51.533 07:49:34 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:39:51.533 07:49:34 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:51.533 07:49:34 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:39:51.533 07:49:34 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:51.533 07:49:34 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:39:51.533 07:49:34 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:51.533 07:49:34 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:51.533 rmmod nvme_tcp 00:39:51.533 rmmod nvme_fabrics 00:39:51.533 rmmod nvme_keyring 00:39:51.533 07:49:34 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:39:51.534 07:49:34 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:39:51.534 07:49:34 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:39:51.534 07:49:34 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2437934 ']' 00:39:51.534 07:49:34 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2437934 00:39:51.534 07:49:34 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2437934 ']' 00:39:51.534 07:49:34 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2437934 00:39:51.534 07:49:34 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:39:51.534 07:49:34 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:51.534 07:49:34 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2437934 00:39:51.534 07:49:34 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:51.534 07:49:34 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:51.534 07:49:34 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2437934' 00:39:51.534 killing process with pid 2437934 00:39:51.534 07:49:34 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2437934 00:39:51.534 07:49:34 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2437934 00:39:51.534 07:49:34 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:51.534 07:49:34 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:54.080 Waiting for block devices as requested 00:39:54.080 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:54.341 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:54.341 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:54.341 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:54.341 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:54.602 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:54.602 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:54.602 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:54.865 0000:65:00.0 (144d 
a80a): vfio-pci -> nvme 00:39:54.865 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:55.126 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:55.126 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:55.126 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:55.126 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:55.392 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:55.392 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:55.392 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:55.654 07:49:39 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:55.654 07:49:39 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:55.654 07:49:39 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:39:55.654 07:49:39 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:39:55.654 07:49:39 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:55.654 07:49:39 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:39:55.654 07:49:39 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:55.654 07:49:39 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:55.654 07:49:39 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:55.654 07:49:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:55.654 07:49:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:58.201 07:49:41 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:58.201 00:39:58.201 real 1m20.082s 00:39:58.201 user 8m4.878s 00:39:58.201 sys 0m22.710s 00:39:58.201 07:49:41 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:58.201 07:49:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:58.201 ************************************ 00:39:58.201 END TEST nvmf_dif 00:39:58.201 ************************************ 00:39:58.201 07:49:41 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:58.201 07:49:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:58.201 07:49:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:58.201 07:49:41 -- common/autotest_common.sh@10 -- # set +x 00:39:58.201 ************************************ 00:39:58.201 START TEST nvmf_abort_qd_sizes 00:39:58.201 ************************************ 00:39:58.201 07:49:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:58.201 * Looking for test storage... 00:39:58.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 
00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:58.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:58.201 --rc genhtml_branch_coverage=1 00:39:58.201 --rc genhtml_function_coverage=1 00:39:58.201 --rc 
genhtml_legend=1 00:39:58.201 --rc geninfo_all_blocks=1 00:39:58.201 --rc geninfo_unexecuted_blocks=1 00:39:58.201 00:39:58.201 ' 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:58.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:58.201 --rc genhtml_branch_coverage=1 00:39:58.201 --rc genhtml_function_coverage=1 00:39:58.201 --rc genhtml_legend=1 00:39:58.201 --rc geninfo_all_blocks=1 00:39:58.201 --rc geninfo_unexecuted_blocks=1 00:39:58.201 00:39:58.201 ' 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:58.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:58.201 --rc genhtml_branch_coverage=1 00:39:58.201 --rc genhtml_function_coverage=1 00:39:58.201 --rc genhtml_legend=1 00:39:58.201 --rc geninfo_all_blocks=1 00:39:58.201 --rc geninfo_unexecuted_blocks=1 00:39:58.201 00:39:58.201 ' 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:58.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:58.201 --rc genhtml_branch_coverage=1 00:39:58.201 --rc genhtml_function_coverage=1 00:39:58.201 --rc genhtml_legend=1 00:39:58.201 --rc geninfo_all_blocks=1 00:39:58.201 --rc geninfo_unexecuted_blocks=1 00:39:58.201 00:39:58.201 ' 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:58.201 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:58.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:39:58.202 07:49:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:06.344 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:06.344 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:06.344 Found net devices under 0000:31:00.0: cvl_0_0 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:06.344 Found net devices under 0000:31:00.1: cvl_0_1 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:06.344 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:06.345 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:06.345 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:06.345 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:06.345 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:06.345 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:06.345 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:06.345 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:06.345 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:06.345 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:06.345 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:06.345 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:06.345 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:06.345 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:06.345 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:06.345 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:06.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:06.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:40:06.345 00:40:06.345 --- 10.0.0.2 ping statistics --- 00:40:06.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:06.345 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:40:06.345 07:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:06.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:06.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:40:06.345 00:40:06.345 --- 10.0.0.1 ping statistics --- 00:40:06.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:06.345 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:40:06.345 07:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:06.345 07:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:40:06.345 07:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:40:06.345 07:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:09.745 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:09.745 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:09.745 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:09.745 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:09.745 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:09.745 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:09.745 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:09.745 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:09.745 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:09.745 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:09.745 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:09.745 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:09.745 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:09.745 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:09.745 0000:00:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:40:09.745 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:09.745 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:40:10.318 07:49:54 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:10.318 07:49:54 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:10.318 07:49:54 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:10.318 07:49:54 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:10.318 07:49:54 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:10.318 07:49:54 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:10.318 07:49:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:40:10.318 07:49:54 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:10.318 07:49:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:10.318 07:49:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:10.318 07:49:54 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2458550 00:40:10.318 07:49:54 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2458550 00:40:10.318 07:49:54 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:40:10.318 07:49:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2458550 ']' 00:40:10.318 07:49:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:10.318 07:49:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:10.318 07:49:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:40:10.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:10.318 07:49:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:10.318 07:49:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:10.318 [2024-11-26 07:49:54.330103] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:40:10.318 [2024-11-26 07:49:54.330165] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:10.318 [2024-11-26 07:49:54.419831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:10.580 [2024-11-26 07:49:54.462704] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:10.580 [2024-11-26 07:49:54.462742] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:10.580 [2024-11-26 07:49:54.462751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:10.580 [2024-11-26 07:49:54.462757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:10.580 [2024-11-26 07:49:54.462763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:10.580 [2024-11-26 07:49:54.464397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:10.580 [2024-11-26 07:49:54.464513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:10.580 [2024-11-26 07:49:54.464669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:10.580 [2024-11-26 07:49:54.464669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:11.150 07:49:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:11.150 07:49:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:40:11.150 07:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:11.150 07:49:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:11.150 07:49:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:11.150 07:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:11.151 07:49:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:11.151 ************************************ 00:40:11.151 START TEST spdk_target_abort 00:40:11.151 ************************************ 00:40:11.151 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:40:11.151 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:40:11.151 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:40:11.151 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.151 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:11.723 spdk_targetn1 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:11.723 [2024-11-26 07:49:55.551828] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:11.723 [2024-11-26 07:49:55.596366] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:40:11.723 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:11.724 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:11.724 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:11.724 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:11.724 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:11.724 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:11.724 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:11.724 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:11.724 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:11.724 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:11.724 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:40:11.724 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:11.724 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:11.724 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:11.724 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:11.724 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:11.724 07:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:11.724 [2024-11-26 07:49:55.797784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:680 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:40:11.724 [2024-11-26 07:49:55.797812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0056 p:1 m:0 dnr:0 00:40:11.724 [2024-11-26 07:49:55.828347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1704 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:40:11.724 [2024-11-26 07:49:55.828365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00d7 p:1 m:0 dnr:0 00:40:11.984 [2024-11-26 07:49:55.876389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3464 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:40:11.984 [2024-11-26 
07:49:55.876407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00b3 p:0 m:0 dnr:0 00:40:11.984 [2024-11-26 07:49:55.892302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:4016 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:40:11.984 [2024-11-26 07:49:55.892318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00f8 p:0 m:0 dnr:0 00:40:11.984 [2024-11-26 07:49:55.892376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:4024 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:40:11.984 [2024-11-26 07:49:55.892383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00f8 p:0 m:0 dnr:0 00:40:15.282 Initializing NVMe Controllers 00:40:15.282 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:15.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:15.282 Initialization complete. Launching workers. 
00:40:15.282 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13162, failed: 5 00:40:15.282 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3262, failed to submit 9905 00:40:15.282 success 790, unsuccessful 2472, failed 0 00:40:15.282 07:49:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:15.283 07:49:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:15.283 [2024-11-26 07:49:59.176053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:1208 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:40:15.283 [2024-11-26 07:49:59.176094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:009c p:1 m:0 dnr:0 00:40:15.283 [2024-11-26 07:49:59.216024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:1976 len:8 PRP1 0x200004e3a000 PRP2 0x0 00:40:15.283 [2024-11-26 07:49:59.216050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:40:15.283 [2024-11-26 07:49:59.255032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:179 nsid:1 lba:2864 len:8 PRP1 0x200004e48000 PRP2 0x0 00:40:15.283 [2024-11-26 07:49:59.255055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:179 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:40:15.283 [2024-11-26 07:49:59.293795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:3744 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:40:15.283 [2024-11-26 07:49:59.293818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY 
REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:00db p:0 m:0 dnr:0 00:40:16.666 [2024-11-26 07:50:00.788258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:38112 len:8 PRP1 0x200004e50000 PRP2 0x0 00:40:16.666 [2024-11-26 07:50:00.788307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:009d p:1 m:0 dnr:0 00:40:17.237 [2024-11-26 07:50:01.231100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:48168 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:40:17.237 [2024-11-26 07:50:01.231132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:008c p:0 m:0 dnr:0 00:40:18.177 Initializing NVMe Controllers 00:40:18.177 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:18.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:18.177 Initialization complete. Launching workers. 
00:40:18.177 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8587, failed: 6 00:40:18.177 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1216, failed to submit 7377 00:40:18.177 success 351, unsuccessful 865, failed 0 00:40:18.177 07:50:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:18.177 07:50:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:21.476 [2024-11-26 07:50:05.175510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:171 nsid:1 lba:289552 len:8 PRP1 0x200004af0000 PRP2 0x0 00:40:21.476 [2024-11-26 07:50:05.175540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:171 cdw0:0 sqhd:00da p:1 m:0 dnr:0 00:40:21.736 Initializing NVMe Controllers 00:40:21.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:21.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:21.736 Initialization complete. Launching workers. 
00:40:21.736 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42045, failed: 1 00:40:21.736 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2691, failed to submit 39355 00:40:21.736 success 580, unsuccessful 2111, failed 0 00:40:21.736 07:50:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:40:21.736 07:50:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:21.736 07:50:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:21.736 07:50:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:21.736 07:50:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:40:21.736 07:50:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:21.736 07:50:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:23.647 07:50:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2458550 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2458550 ']' 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2458550 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2458550 00:40:23.648 07:50:07 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2458550' 00:40:23.648 killing process with pid 2458550 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2458550 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2458550 00:40:23.648 00:40:23.648 real 0m12.423s 00:40:23.648 user 0m50.728s 00:40:23.648 sys 0m1.876s 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:23.648 ************************************ 00:40:23.648 END TEST spdk_target_abort 00:40:23.648 ************************************ 00:40:23.648 07:50:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:40:23.648 07:50:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:23.648 07:50:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:23.648 07:50:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:23.648 ************************************ 00:40:23.648 START TEST kernel_target_abort 00:40:23.648 ************************************ 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:40:23.648 07:50:07 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:40:23.648 07:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:27.859 Waiting for block devices as requested 00:40:27.859 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:27.859 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:27.859 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:27.859 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:27.859 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:27.859 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:27.859 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:28.121 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:28.121 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:40:28.121 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:28.382 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:28.382 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:28.382 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:28.642 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:28.642 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:28.642 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:28.642 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:40:29.215 07:50:13 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:40:29.215 No valid GPT data, bailing 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:40:29.215 00:40:29.215 Discovery Log Number of Records 2, Generation counter 2 00:40:29.215 =====Discovery Log Entry 0====== 00:40:29.215 trtype: tcp 00:40:29.215 adrfam: ipv4 00:40:29.215 subtype: current discovery subsystem 00:40:29.215 treq: not specified, sq flow control disable supported 00:40:29.215 portid: 1 00:40:29.215 trsvcid: 4420 00:40:29.215 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:29.215 traddr: 10.0.0.1 00:40:29.215 eflags: none 00:40:29.215 sectype: none 00:40:29.215 =====Discovery Log Entry 1====== 00:40:29.215 trtype: tcp 00:40:29.215 adrfam: ipv4 00:40:29.215 subtype: nvme subsystem 00:40:29.215 treq: not specified, sq flow control disable supported 00:40:29.215 portid: 1 00:40:29.215 trsvcid: 4420 00:40:29.215 subnqn: nqn.2016-06.io.spdk:testnqn 00:40:29.215 traddr: 10.0.0.1 00:40:29.215 eflags: none 00:40:29.215 sectype: none 00:40:29.215 07:50:13 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:29.215 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:40:29.216 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:40:29.216 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:40:29.216 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:29.216 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:29.216 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:29.216 07:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:32.515 Initializing NVMe Controllers 00:40:32.515 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:32.515 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:32.515 Initialization complete. Launching workers. 
00:40:32.515 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66804, failed: 0 00:40:32.515 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 66804, failed to submit 0 00:40:32.515 success 0, unsuccessful 66804, failed 0 00:40:32.515 07:50:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:32.515 07:50:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:35.811 Initializing NVMe Controllers 00:40:35.811 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:35.811 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:35.811 Initialization complete. Launching workers. 00:40:35.811 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 107997, failed: 0 00:40:35.811 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27170, failed to submit 80827 00:40:35.811 success 0, unsuccessful 27170, failed 0 00:40:35.811 07:50:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:35.811 07:50:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:39.108 Initializing NVMe Controllers 00:40:39.108 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:39.108 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:39.108 Initialization complete. Launching workers. 
00:40:39.108 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101985, failed: 0 00:40:39.108 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25514, failed to submit 76471 00:40:39.108 success 0, unsuccessful 25514, failed 0 00:40:39.108 07:50:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:40:39.108 07:50:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:40:39.108 07:50:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:40:39.108 07:50:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:39.108 07:50:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:39.108 07:50:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:39.108 07:50:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:39.108 07:50:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:40:39.108 07:50:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:40:39.108 07:50:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:42.410 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:42.410 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:42.410 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:42.410 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:42.410 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:42.410 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:42.410 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:42.410 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:42.410 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:42.410 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:42.410 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:42.671 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:42.671 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:42.671 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:42.671 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:42.671 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:44.585 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:40:44.585 00:40:44.585 real 0m20.957s 00:40:44.585 user 0m10.301s 00:40:44.585 sys 0m6.443s 00:40:44.585 07:50:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:44.585 07:50:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:44.585 ************************************ 00:40:44.585 END TEST kernel_target_abort 00:40:44.585 ************************************ 00:40:44.846 07:50:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:40:44.846 07:50:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:40:44.846 07:50:28 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:44.846 07:50:28 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:40:44.846 07:50:28 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:44.846 07:50:28 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:40:44.846 07:50:28 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:44.846 07:50:28 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:44.846 rmmod nvme_tcp 00:40:44.846 rmmod nvme_fabrics 00:40:44.846 rmmod nvme_keyring 00:40:44.846 07:50:28 nvmf_abort_qd_sizes -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:44.846 07:50:28 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:40:44.846 07:50:28 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:40:44.846 07:50:28 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2458550 ']' 00:40:44.846 07:50:28 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2458550 00:40:44.846 07:50:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2458550 ']' 00:40:44.846 07:50:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2458550 00:40:44.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2458550) - No such process 00:40:44.846 07:50:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2458550 is not found' 00:40:44.846 Process with pid 2458550 is not found 00:40:44.846 07:50:28 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:40:44.846 07:50:28 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:49.055 Waiting for block devices as requested 00:40:49.055 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:49.055 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:49.055 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:49.055 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:49.055 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:49.055 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:49.055 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:49.055 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:49.055 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:40:49.316 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:49.316 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:49.578 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:49.578 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:49.578 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 
00:40:49.578 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:49.839 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:49.839 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:50.099 07:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:50.099 07:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:50.099 07:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:40:50.099 07:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:40:50.099 07:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:50.099 07:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:40:50.099 07:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:50.099 07:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:50.099 07:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:50.099 07:50:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:50.099 07:50:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:52.644 07:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:52.644 00:40:52.644 real 0m54.289s 00:40:52.644 user 1m6.991s 00:40:52.644 sys 0m19.977s 00:40:52.644 07:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:52.644 07:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:52.644 ************************************ 00:40:52.644 END TEST nvmf_abort_qd_sizes 00:40:52.644 ************************************ 00:40:52.644 07:50:36 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:52.644 07:50:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:52.644 07:50:36 -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:40:52.644 07:50:36 -- common/autotest_common.sh@10 -- # set +x 00:40:52.644 ************************************ 00:40:52.644 START TEST keyring_file 00:40:52.644 ************************************ 00:40:52.644 07:50:36 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:52.644 * Looking for test storage... 00:40:52.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:52.644 07:50:36 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:52.644 07:50:36 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:40:52.644 07:50:36 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:52.644 07:50:36 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@345 -- # : 1 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:52.644 07:50:36 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@353 -- # local d=1 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@355 -- # echo 1 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@353 -- # local d=2 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@355 -- # echo 2 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:52.644 07:50:36 keyring_file -- scripts/common.sh@368 -- # return 0 00:40:52.644 07:50:36 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:52.644 07:50:36 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:52.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.645 --rc genhtml_branch_coverage=1 00:40:52.645 --rc genhtml_function_coverage=1 00:40:52.645 --rc genhtml_legend=1 00:40:52.645 --rc geninfo_all_blocks=1 00:40:52.645 --rc geninfo_unexecuted_blocks=1 00:40:52.645 00:40:52.645 ' 00:40:52.645 07:50:36 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:52.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.645 --rc genhtml_branch_coverage=1 00:40:52.645 --rc genhtml_function_coverage=1 00:40:52.645 --rc genhtml_legend=1 00:40:52.645 --rc geninfo_all_blocks=1 00:40:52.645 --rc 
geninfo_unexecuted_blocks=1 00:40:52.645 00:40:52.645 ' 00:40:52.645 07:50:36 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:52.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.645 --rc genhtml_branch_coverage=1 00:40:52.645 --rc genhtml_function_coverage=1 00:40:52.645 --rc genhtml_legend=1 00:40:52.645 --rc geninfo_all_blocks=1 00:40:52.645 --rc geninfo_unexecuted_blocks=1 00:40:52.645 00:40:52.645 ' 00:40:52.645 07:50:36 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:52.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.645 --rc genhtml_branch_coverage=1 00:40:52.645 --rc genhtml_function_coverage=1 00:40:52.645 --rc genhtml_legend=1 00:40:52.645 --rc geninfo_all_blocks=1 00:40:52.645 --rc geninfo_unexecuted_blocks=1 00:40:52.645 00:40:52.645 ' 00:40:52.645 07:50:36 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:52.645 07:50:36 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:52.645 07:50:36 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:40:52.645 07:50:36 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:52.645 07:50:36 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:52.645 07:50:36 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:52.645 07:50:36 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.645 07:50:36 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.645 07:50:36 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.645 07:50:36 keyring_file -- paths/export.sh@5 -- # export PATH 00:40:52.645 07:50:36 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@51 -- # : 0 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:40:52.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:52.645 07:50:36 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:52.645 07:50:36 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:52.645 07:50:36 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:40:52.645 07:50:36 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:40:52.645 07:50:36 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:40:52.645 07:50:36 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sTxLwK7BXk 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sTxLwK7BXk 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sTxLwK7BXk 00:40:52.645 07:50:36 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.sTxLwK7BXk 00:40:52.645 07:50:36 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@17 -- # name=key1 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pKsScGy1dM 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:52.645 07:50:36 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pKsScGy1dM 00:40:52.645 07:50:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pKsScGy1dM 00:40:52.645 07:50:36 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.pKsScGy1dM 
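The two `prep_key` runs above pipe the raw hex key through `format_interchange_psk`, which emits the NVMe/TCP TLS PSK interchange string (`NVMeTLSkey-1:<hash>:<base64>:`). A minimal sketch of that encoding, assuming the usual layout of prefix, two-digit hash indicator, and base64 of the key bytes with a little-endian CRC32 appended (the exact layout is inferred, not shown in this log):

```python
import base64
import struct
import zlib

def format_interchange_psk(hex_key: str, digest: int = 0) -> str:
    """Sketch of the TLS PSK interchange encoding used by prep_key:
    NVMeTLSkey-1:<hh>:base64(key || CRC32(key)):  (layout assumed)."""
    key = bytes.fromhex(hex_key)
    crc = struct.pack("<I", zlib.crc32(key))      # CRC32 of key, little-endian
    b64 = base64.b64encode(key + crc).decode()
    return f"NVMeTLSkey-1:{digest:02x}:{b64}:"

# Same key0 material as the log (00112233445566778899aabbccddeeff, digest 0)
psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
print(psk)
```

The test then writes this string to a `mktemp` path and `chmod 0600`s it before handing the path to the keyring.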
00:40:52.645 07:50:36 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:52.645 07:50:36 keyring_file -- keyring/file.sh@30 -- # tgtpid=2469471 00:40:52.645 07:50:36 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2469471 00:40:52.645 07:50:36 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2469471 ']' 00:40:52.645 07:50:36 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:52.645 07:50:36 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:52.645 07:50:36 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:52.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:52.645 07:50:36 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:52.645 07:50:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:52.645 [2024-11-26 07:50:36.685853] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:40:52.645 [2024-11-26 07:50:36.685942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469471 ] 00:40:52.645 [2024-11-26 07:50:36.763688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:52.906 [2024-11-26 07:50:36.800437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:53.478 07:50:37 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:53.478 [2024-11-26 07:50:37.479413] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:53.478 null0 00:40:53.478 [2024-11-26 07:50:37.511462] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:53.478 [2024-11-26 07:50:37.511796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:53.478 07:50:37 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:53.478 [2024-11-26 07:50:37.543530] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:40:53.478 request: 00:40:53.478 { 00:40:53.478 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:40:53.478 "secure_channel": false, 00:40:53.478 "listen_address": { 00:40:53.478 "trtype": "tcp", 00:40:53.478 "traddr": "127.0.0.1", 00:40:53.478 "trsvcid": "4420" 00:40:53.478 }, 00:40:53.478 "method": "nvmf_subsystem_add_listener", 00:40:53.478 "req_id": 1 00:40:53.478 } 00:40:53.478 Got JSON-RPC error response 00:40:53.478 response: 00:40:53.478 { 00:40:53.478 "code": -32602, 00:40:53.478 "message": "Invalid parameters" 00:40:53.478 } 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:53.478 07:50:37 keyring_file -- keyring/file.sh@47 -- # bperfpid=2469490 00:40:53.478 07:50:37 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2469490 /var/tmp/bperf.sock 00:40:53.478 07:50:37 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:40:53.478 07:50:37 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2469490 ']' 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:53.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:53.478 07:50:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:53.478 [2024-11-26 07:50:37.599929] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:40:53.478 [2024-11-26 07:50:37.599977] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469490 ] 00:40:53.740 [2024-11-26 07:50:37.694267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:53.740 [2024-11-26 07:50:37.730435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:54.313 07:50:38 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:54.313 07:50:38 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:54.313 07:50:38 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sTxLwK7BXk 00:40:54.313 07:50:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sTxLwK7BXk 00:40:54.573 07:50:38 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.pKsScGy1dM 00:40:54.574 07:50:38 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.pKsScGy1dM 00:40:54.834 07:50:38 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:40:54.834 07:50:38 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:40:54.835 07:50:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:54.835 07:50:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:54.835 07:50:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:54.835 07:50:38 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.sTxLwK7BXk == \/\t\m\p\/\t\m\p\.\s\T\x\L\w\K\7\B\X\k ]] 00:40:54.835 07:50:38 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:40:54.835 07:50:38 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:40:54.835 07:50:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:54.835 07:50:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:54.835 07:50:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:55.095 07:50:39 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.pKsScGy1dM == \/\t\m\p\/\t\m\p\.\p\K\s\S\c\G\y\1\d\M ]] 00:40:55.095 07:50:39 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:40:55.095 07:50:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:55.095 07:50:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:55.095 07:50:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:55.095 07:50:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:55.095 07:50:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
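The repeated `keyring_get_keys` + `jq '.[] | select(.name == "key0")'` pattern above just extracts one key's record from the RPC's JSON array so the test can compare `.path` and `.refcnt`. A Python equivalent of that jq filter (the sample payload below is illustrative, shaped like the output, not copied from this run):

```python
import json

def get_key(keys_json, name):
    """Equivalent of jq '.[] | select(.name == NAME)' over keyring_get_keys output."""
    return next((k for k in json.loads(keys_json) if k["name"] == name), None)

# Illustrative keyring_get_keys-style payload
sample = '[{"name": "key0", "path": "/tmp/key0file", "refcnt": 1}]'
print(get_key(sample, "key0")["refcnt"])  # the value the (( 1 == 1 )) checks compare
```

The refcnt climbing to 2 after `bdev_nvme_attach_controller --psk key0` is how the test confirms the controller actually took a reference on the key.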
00:40:55.355 07:50:39 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:40:55.355 07:50:39 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:40:55.355 07:50:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:55.355 07:50:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:55.355 07:50:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:55.355 07:50:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:55.355 07:50:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:55.355 07:50:39 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:40:55.355 07:50:39 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:55.355 07:50:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:55.614 [2024-11-26 07:50:39.553059] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:55.614 nvme0n1 00:40:55.614 07:50:39 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:40:55.614 07:50:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:55.614 07:50:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:55.614 07:50:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:55.614 07:50:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:55.614 07:50:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:40:55.876 07:50:39 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:40:55.876 07:50:39 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:40:55.876 07:50:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:55.877 07:50:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:55.877 07:50:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:55.877 07:50:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:55.877 07:50:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:55.877 07:50:39 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:40:55.877 07:50:39 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:56.139 Running I/O for 1 seconds... 00:40:57.081 15629.00 IOPS, 61.05 MiB/s 00:40:57.081 Latency(us) 00:40:57.081 [2024-11-26T06:50:41.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:57.081 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:40:57.081 nvme0n1 : 1.01 15644.42 61.11 0.00 0.00 8150.05 4724.05 14308.69 00:40:57.081 [2024-11-26T06:50:41.218Z] =================================================================================================================== 00:40:57.081 [2024-11-26T06:50:41.218Z] Total : 15644.42 61.11 0.00 0.00 8150.05 4724.05 14308.69 00:40:57.081 { 00:40:57.081 "results": [ 00:40:57.081 { 00:40:57.081 "job": "nvme0n1", 00:40:57.081 "core_mask": "0x2", 00:40:57.081 "workload": "randrw", 00:40:57.081 "percentage": 50, 00:40:57.081 "status": "finished", 00:40:57.081 "queue_depth": 128, 00:40:57.081 "io_size": 4096, 00:40:57.081 "runtime": 1.00726, 00:40:57.081 "iops": 15644.421499910648, 00:40:57.081 "mibps": 61.11102148402597, 
00:40:57.081 "io_failed": 0, 00:40:57.081 "io_timeout": 0, 00:40:57.081 "avg_latency_us": 8150.047061809874, 00:40:57.081 "min_latency_us": 4724.053333333333, 00:40:57.081 "max_latency_us": 14308.693333333333 00:40:57.081 } 00:40:57.081 ], 00:40:57.081 "core_count": 1 00:40:57.081 } 00:40:57.081 07:50:41 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:57.081 07:50:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:57.342 07:50:41 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:40:57.342 07:50:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:57.342 07:50:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:57.342 07:50:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:57.342 07:50:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:57.342 07:50:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:57.342 07:50:41 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:40:57.342 07:50:41 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:40:57.342 07:50:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:57.342 07:50:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:57.342 07:50:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:57.342 07:50:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:57.342 07:50:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:57.604 07:50:41 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:40:57.604 07:50:41 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:57.604 07:50:41 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:57.604 07:50:41 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:57.604 07:50:41 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:57.604 07:50:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:57.604 07:50:41 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:57.604 07:50:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:57.604 07:50:41 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:57.604 07:50:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:57.866 [2024-11-26 07:50:41.784362] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:57.866 [2024-11-26 07:50:41.785358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc569f0 (107): Transport endpoint is not connected 00:40:57.866 [2024-11-26 07:50:41.786351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc569f0 (9): Bad file descriptor 00:40:57.866 [2024-11-26 07:50:41.787350] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:40:57.866 [2024-11-26 07:50:41.787360] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:57.866 [2024-11-26 07:50:41.787374] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:57.866 [2024-11-26 07:50:41.787385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:40:57.866 request: 00:40:57.866 { 00:40:57.866 "name": "nvme0", 00:40:57.866 "trtype": "tcp", 00:40:57.866 "traddr": "127.0.0.1", 00:40:57.866 "adrfam": "ipv4", 00:40:57.866 "trsvcid": "4420", 00:40:57.866 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:57.866 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:57.866 "prchk_reftag": false, 00:40:57.866 "prchk_guard": false, 00:40:57.866 "hdgst": false, 00:40:57.866 "ddgst": false, 00:40:57.866 "psk": "key1", 00:40:57.866 "allow_unrecognized_csi": false, 00:40:57.866 "method": "bdev_nvme_attach_controller", 00:40:57.866 "req_id": 1 00:40:57.866 } 00:40:57.866 Got JSON-RPC error response 00:40:57.866 response: 00:40:57.866 { 00:40:57.866 "code": -5, 00:40:57.866 "message": "Input/output error" 00:40:57.866 } 00:40:57.866 07:50:41 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:57.866 07:50:41 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:57.866 07:50:41 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:57.866 07:50:41 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:57.866 07:50:41 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:40:57.866 07:50:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:57.866 07:50:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:57.866 07:50:41 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:40:57.866 07:50:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:57.866 07:50:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:57.866 07:50:41 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:40:57.866 07:50:41 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:40:57.866 07:50:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:57.866 07:50:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:57.866 07:50:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:57.866 07:50:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:57.866 07:50:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:58.127 07:50:42 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:40:58.127 07:50:42 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:40:58.127 07:50:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:58.388 07:50:42 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:40:58.388 07:50:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:40:58.388 07:50:42 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:40:58.388 07:50:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:58.388 07:50:42 keyring_file -- keyring/file.sh@78 -- # jq length 00:40:58.649 07:50:42 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:40:58.649 07:50:42 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.sTxLwK7BXk 00:40:58.649 07:50:42 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.sTxLwK7BXk 00:40:58.649 07:50:42 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:58.649 07:50:42 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.sTxLwK7BXk 00:40:58.649 07:50:42 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:58.649 07:50:42 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:58.649 07:50:42 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:58.649 07:50:42 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:58.649 07:50:42 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sTxLwK7BXk 00:40:58.649 07:50:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sTxLwK7BXk 00:40:58.911 [2024-11-26 07:50:42.827056] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.sTxLwK7BXk': 0100660 00:40:58.911 [2024-11-26 07:50:42.827075] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:40:58.911 request: 00:40:58.911 { 00:40:58.911 "name": "key0", 00:40:58.911 "path": "/tmp/tmp.sTxLwK7BXk", 00:40:58.911 "method": "keyring_file_add_key", 00:40:58.911 "req_id": 1 00:40:58.911 } 00:40:58.911 Got JSON-RPC error response 00:40:58.911 response: 00:40:58.911 { 00:40:58.911 "code": -1, 00:40:58.911 "message": "Operation not permitted" 00:40:58.911 } 00:40:58.911 07:50:42 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:58.911 07:50:42 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:58.911 07:50:42 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:58.911 07:50:42 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:58.911 07:50:42 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.sTxLwK7BXk 00:40:58.911 07:50:42 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sTxLwK7BXk 00:40:58.911 07:50:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sTxLwK7BXk 00:40:58.911 07:50:42 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.sTxLwK7BXk 00:40:58.911 07:50:43 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:40:58.911 07:50:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:58.911 07:50:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:58.911 07:50:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:58.911 07:50:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:58.911 07:50:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:59.173 07:50:43 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:40:59.173 07:50:43 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:59.173 07:50:43 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:59.173 07:50:43 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:59.173 07:50:43 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:59.173 07:50:43 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:59.173 07:50:43 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:59.173 07:50:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:59.173 07:50:43 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:59.173 07:50:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:59.434 [2024-11-26 07:50:43.328326] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.sTxLwK7BXk': No such file or directory 00:40:59.434 [2024-11-26 07:50:43.328343] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:40:59.434 [2024-11-26 07:50:43.328360] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:40:59.434 [2024-11-26 07:50:43.328368] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:40:59.434 [2024-11-26 07:50:43.328376] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:59.434 [2024-11-26 07:50:43.328383] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:40:59.434 request: 00:40:59.434 { 00:40:59.434 "name": "nvme0", 00:40:59.434 "trtype": "tcp", 00:40:59.434 "traddr": "127.0.0.1", 00:40:59.434 "adrfam": "ipv4", 00:40:59.434 "trsvcid": "4420", 00:40:59.434 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:59.434 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:40:59.434 "prchk_reftag": false, 00:40:59.434 "prchk_guard": false, 00:40:59.434 "hdgst": false, 00:40:59.434 "ddgst": false, 00:40:59.434 "psk": "key0", 00:40:59.434 "allow_unrecognized_csi": false, 00:40:59.434 "method": "bdev_nvme_attach_controller", 00:40:59.434 "req_id": 1 00:40:59.434 } 00:40:59.434 Got JSON-RPC error response 00:40:59.434 response: 00:40:59.434 { 00:40:59.434 "code": -19, 00:40:59.434 "message": "No such device" 00:40:59.434 } 00:40:59.434 07:50:43 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:59.434 07:50:43 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:59.434 07:50:43 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:59.434 07:50:43 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:59.434 07:50:43 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:40:59.434 07:50:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:59.434 07:50:43 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:59.434 07:50:43 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:59.434 07:50:43 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:59.434 07:50:43 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:59.434 07:50:43 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:59.434 07:50:43 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:59.434 07:50:43 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uhST1xAqCX 00:40:59.434 07:50:43 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:59.434 07:50:43 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:59.434 07:50:43 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:40:59.434 07:50:43 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:59.434 07:50:43 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:40:59.434 07:50:43 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:59.434 07:50:43 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:59.434 07:50:43 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uhST1xAqCX 00:40:59.434 07:50:43 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uhST1xAqCX 00:40:59.434 07:50:43 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.uhST1xAqCX 00:40:59.434 07:50:43 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uhST1xAqCX 00:40:59.434 07:50:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uhST1xAqCX 00:40:59.696 07:50:43 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:59.696 07:50:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:59.957 nvme0n1 00:40:59.957 07:50:43 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:40:59.957 07:50:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:59.957 07:50:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:59.957 07:50:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:59.957 07:50:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:59.957 07:50:43 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:00.231 07:50:44 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:41:00.231 07:50:44 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:41:00.231 07:50:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:00.231 07:50:44 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:41:00.231 07:50:44 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:41:00.231 07:50:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:00.231 07:50:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:00.231 07:50:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:00.556 07:50:44 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:41:00.556 07:50:44 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:41:00.556 07:50:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:00.556 07:50:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:00.556 07:50:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:00.556 07:50:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:00.556 07:50:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:00.556 07:50:44 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:41:00.556 07:50:44 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:00.556 07:50:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:41:00.859 07:50:44 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:41:00.859 07:50:44 keyring_file -- keyring/file.sh@105 -- # jq length 00:41:00.859 07:50:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:00.859 07:50:44 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:41:00.859 07:50:44 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uhST1xAqCX 00:41:00.859 07:50:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uhST1xAqCX 00:41:01.120 07:50:45 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.pKsScGy1dM 00:41:01.120 07:50:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.pKsScGy1dM 00:41:01.381 07:50:45 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:01.381 07:50:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:01.642 nvme0n1 00:41:01.642 07:50:45 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:41:01.642 07:50:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:41:01.902 07:50:45 keyring_file -- keyring/file.sh@113 -- # config='{ 00:41:01.902 "subsystems": [ 00:41:01.902 { 00:41:01.902 "subsystem": 
"keyring", 00:41:01.902 "config": [ 00:41:01.902 { 00:41:01.902 "method": "keyring_file_add_key", 00:41:01.902 "params": { 00:41:01.902 "name": "key0", 00:41:01.902 "path": "/tmp/tmp.uhST1xAqCX" 00:41:01.902 } 00:41:01.902 }, 00:41:01.902 { 00:41:01.902 "method": "keyring_file_add_key", 00:41:01.902 "params": { 00:41:01.902 "name": "key1", 00:41:01.902 "path": "/tmp/tmp.pKsScGy1dM" 00:41:01.902 } 00:41:01.902 } 00:41:01.902 ] 00:41:01.902 }, 00:41:01.902 { 00:41:01.902 "subsystem": "iobuf", 00:41:01.902 "config": [ 00:41:01.902 { 00:41:01.902 "method": "iobuf_set_options", 00:41:01.902 "params": { 00:41:01.902 "small_pool_count": 8192, 00:41:01.903 "large_pool_count": 1024, 00:41:01.903 "small_bufsize": 8192, 00:41:01.903 "large_bufsize": 135168, 00:41:01.903 "enable_numa": false 00:41:01.903 } 00:41:01.903 } 00:41:01.903 ] 00:41:01.903 }, 00:41:01.903 { 00:41:01.903 "subsystem": "sock", 00:41:01.903 "config": [ 00:41:01.903 { 00:41:01.903 "method": "sock_set_default_impl", 00:41:01.903 "params": { 00:41:01.903 "impl_name": "posix" 00:41:01.903 } 00:41:01.903 }, 00:41:01.903 { 00:41:01.903 "method": "sock_impl_set_options", 00:41:01.903 "params": { 00:41:01.903 "impl_name": "ssl", 00:41:01.903 "recv_buf_size": 4096, 00:41:01.903 "send_buf_size": 4096, 00:41:01.903 "enable_recv_pipe": true, 00:41:01.903 "enable_quickack": false, 00:41:01.903 "enable_placement_id": 0, 00:41:01.903 "enable_zerocopy_send_server": true, 00:41:01.903 "enable_zerocopy_send_client": false, 00:41:01.903 "zerocopy_threshold": 0, 00:41:01.903 "tls_version": 0, 00:41:01.903 "enable_ktls": false 00:41:01.903 } 00:41:01.903 }, 00:41:01.903 { 00:41:01.903 "method": "sock_impl_set_options", 00:41:01.903 "params": { 00:41:01.903 "impl_name": "posix", 00:41:01.903 "recv_buf_size": 2097152, 00:41:01.903 "send_buf_size": 2097152, 00:41:01.903 "enable_recv_pipe": true, 00:41:01.903 "enable_quickack": false, 00:41:01.903 "enable_placement_id": 0, 00:41:01.903 "enable_zerocopy_send_server": true, 
00:41:01.903 "enable_zerocopy_send_client": false, 00:41:01.903 "zerocopy_threshold": 0, 00:41:01.903 "tls_version": 0, 00:41:01.903 "enable_ktls": false 00:41:01.903 } 00:41:01.903 } 00:41:01.903 ] 00:41:01.903 }, 00:41:01.903 { 00:41:01.903 "subsystem": "vmd", 00:41:01.903 "config": [] 00:41:01.903 }, 00:41:01.903 { 00:41:01.903 "subsystem": "accel", 00:41:01.903 "config": [ 00:41:01.903 { 00:41:01.903 "method": "accel_set_options", 00:41:01.903 "params": { 00:41:01.903 "small_cache_size": 128, 00:41:01.903 "large_cache_size": 16, 00:41:01.903 "task_count": 2048, 00:41:01.903 "sequence_count": 2048, 00:41:01.903 "buf_count": 2048 00:41:01.903 } 00:41:01.903 } 00:41:01.903 ] 00:41:01.903 }, 00:41:01.903 { 00:41:01.903 "subsystem": "bdev", 00:41:01.903 "config": [ 00:41:01.903 { 00:41:01.903 "method": "bdev_set_options", 00:41:01.903 "params": { 00:41:01.903 "bdev_io_pool_size": 65535, 00:41:01.903 "bdev_io_cache_size": 256, 00:41:01.903 "bdev_auto_examine": true, 00:41:01.903 "iobuf_small_cache_size": 128, 00:41:01.903 "iobuf_large_cache_size": 16 00:41:01.903 } 00:41:01.903 }, 00:41:01.903 { 00:41:01.903 "method": "bdev_raid_set_options", 00:41:01.903 "params": { 00:41:01.903 "process_window_size_kb": 1024, 00:41:01.903 "process_max_bandwidth_mb_sec": 0 00:41:01.903 } 00:41:01.903 }, 00:41:01.903 { 00:41:01.903 "method": "bdev_iscsi_set_options", 00:41:01.903 "params": { 00:41:01.903 "timeout_sec": 30 00:41:01.903 } 00:41:01.903 }, 00:41:01.903 { 00:41:01.903 "method": "bdev_nvme_set_options", 00:41:01.903 "params": { 00:41:01.903 "action_on_timeout": "none", 00:41:01.903 "timeout_us": 0, 00:41:01.903 "timeout_admin_us": 0, 00:41:01.903 "keep_alive_timeout_ms": 10000, 00:41:01.903 "arbitration_burst": 0, 00:41:01.903 "low_priority_weight": 0, 00:41:01.903 "medium_priority_weight": 0, 00:41:01.903 "high_priority_weight": 0, 00:41:01.903 "nvme_adminq_poll_period_us": 10000, 00:41:01.903 "nvme_ioq_poll_period_us": 0, 00:41:01.903 "io_queue_requests": 512, 
00:41:01.903 "delay_cmd_submit": true, 00:41:01.903 "transport_retry_count": 4, 00:41:01.903 "bdev_retry_count": 3, 00:41:01.903 "transport_ack_timeout": 0, 00:41:01.903 "ctrlr_loss_timeout_sec": 0, 00:41:01.903 "reconnect_delay_sec": 0, 00:41:01.903 "fast_io_fail_timeout_sec": 0, 00:41:01.903 "disable_auto_failback": false, 00:41:01.903 "generate_uuids": false, 00:41:01.903 "transport_tos": 0, 00:41:01.903 "nvme_error_stat": false, 00:41:01.903 "rdma_srq_size": 0, 00:41:01.903 "io_path_stat": false, 00:41:01.903 "allow_accel_sequence": false, 00:41:01.903 "rdma_max_cq_size": 0, 00:41:01.903 "rdma_cm_event_timeout_ms": 0, 00:41:01.903 "dhchap_digests": [ 00:41:01.903 "sha256", 00:41:01.903 "sha384", 00:41:01.903 "sha512" 00:41:01.903 ], 00:41:01.903 "dhchap_dhgroups": [ 00:41:01.903 "null", 00:41:01.903 "ffdhe2048", 00:41:01.903 "ffdhe3072", 00:41:01.903 "ffdhe4096", 00:41:01.903 "ffdhe6144", 00:41:01.903 "ffdhe8192" 00:41:01.903 ] 00:41:01.903 } 00:41:01.903 }, 00:41:01.903 { 00:41:01.903 "method": "bdev_nvme_attach_controller", 00:41:01.903 "params": { 00:41:01.903 "name": "nvme0", 00:41:01.903 "trtype": "TCP", 00:41:01.903 "adrfam": "IPv4", 00:41:01.903 "traddr": "127.0.0.1", 00:41:01.903 "trsvcid": "4420", 00:41:01.903 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:01.903 "prchk_reftag": false, 00:41:01.903 "prchk_guard": false, 00:41:01.903 "ctrlr_loss_timeout_sec": 0, 00:41:01.903 "reconnect_delay_sec": 0, 00:41:01.903 "fast_io_fail_timeout_sec": 0, 00:41:01.903 "psk": "key0", 00:41:01.903 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:01.903 "hdgst": false, 00:41:01.903 "ddgst": false, 00:41:01.903 "multipath": "multipath" 00:41:01.903 } 00:41:01.903 }, 00:41:01.903 { 00:41:01.903 "method": "bdev_nvme_set_hotplug", 00:41:01.903 "params": { 00:41:01.903 "period_us": 100000, 00:41:01.903 "enable": false 00:41:01.903 } 00:41:01.903 }, 00:41:01.903 { 00:41:01.903 "method": "bdev_wait_for_examine" 00:41:01.903 } 00:41:01.903 ] 00:41:01.903 }, 00:41:01.903 { 
00:41:01.903 "subsystem": "nbd", 00:41:01.903 "config": [] 00:41:01.903 } 00:41:01.903 ] 00:41:01.903 }' 00:41:01.903 07:50:45 keyring_file -- keyring/file.sh@115 -- # killprocess 2469490 00:41:01.903 07:50:45 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2469490 ']' 00:41:01.903 07:50:45 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2469490 00:41:01.903 07:50:45 keyring_file -- common/autotest_common.sh@959 -- # uname 00:41:01.903 07:50:45 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:01.903 07:50:45 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2469490 00:41:01.903 07:50:45 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:01.903 07:50:45 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:01.903 07:50:45 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2469490' 00:41:01.903 killing process with pid 2469490 00:41:01.903 07:50:45 keyring_file -- common/autotest_common.sh@973 -- # kill 2469490 00:41:01.903 Received shutdown signal, test time was about 1.000000 seconds 00:41:01.903 00:41:01.903 Latency(us) 00:41:01.903 [2024-11-26T06:50:46.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:01.903 [2024-11-26T06:50:46.040Z] =================================================================================================================== 00:41:01.903 [2024-11-26T06:50:46.040Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:01.903 07:50:45 keyring_file -- common/autotest_common.sh@978 -- # wait 2469490 00:41:01.903 07:50:45 keyring_file -- keyring/file.sh@118 -- # bperfpid=2471296 00:41:01.903 07:50:45 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2471296 /var/tmp/bperf.sock 00:41:01.903 07:50:45 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2471296 ']' 00:41:01.903 07:50:45 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:41:01.903 07:50:45 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:01.903 07:50:45 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:41:01.903 07:50:45 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:01.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:01.903 07:50:45 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:01.903 07:50:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:01.903 07:50:45 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:41:01.903 "subsystems": [ 00:41:01.903 { 00:41:01.903 "subsystem": "keyring", 00:41:01.903 "config": [ 00:41:01.904 { 00:41:01.904 "method": "keyring_file_add_key", 00:41:01.904 "params": { 00:41:01.904 "name": "key0", 00:41:01.904 "path": "/tmp/tmp.uhST1xAqCX" 00:41:01.904 } 00:41:01.904 }, 00:41:01.904 { 00:41:01.904 "method": "keyring_file_add_key", 00:41:01.904 "params": { 00:41:01.904 "name": "key1", 00:41:01.904 "path": "/tmp/tmp.pKsScGy1dM" 00:41:01.904 } 00:41:01.904 } 00:41:01.904 ] 00:41:01.904 }, 00:41:01.904 { 00:41:01.904 "subsystem": "iobuf", 00:41:01.904 "config": [ 00:41:01.904 { 00:41:01.904 "method": "iobuf_set_options", 00:41:01.904 "params": { 00:41:01.904 "small_pool_count": 8192, 00:41:01.904 "large_pool_count": 1024, 00:41:01.904 "small_bufsize": 8192, 00:41:01.904 "large_bufsize": 135168, 00:41:01.904 "enable_numa": false 00:41:01.904 } 00:41:01.904 } 00:41:01.904 ] 00:41:01.904 }, 00:41:01.904 { 00:41:01.904 "subsystem": "sock", 00:41:01.904 "config": [ 00:41:01.904 { 00:41:01.904 "method": "sock_set_default_impl", 00:41:01.904 "params": { 00:41:01.904 "impl_name": "posix" 00:41:01.904 } 00:41:01.904 }, 
00:41:01.904 { 00:41:01.904 "method": "sock_impl_set_options", 00:41:01.904 "params": { 00:41:01.904 "impl_name": "ssl", 00:41:01.904 "recv_buf_size": 4096, 00:41:01.904 "send_buf_size": 4096, 00:41:01.904 "enable_recv_pipe": true, 00:41:01.904 "enable_quickack": false, 00:41:01.904 "enable_placement_id": 0, 00:41:01.904 "enable_zerocopy_send_server": true, 00:41:01.904 "enable_zerocopy_send_client": false, 00:41:01.904 "zerocopy_threshold": 0, 00:41:01.904 "tls_version": 0, 00:41:01.904 "enable_ktls": false 00:41:01.904 } 00:41:01.904 }, 00:41:01.904 { 00:41:01.904 "method": "sock_impl_set_options", 00:41:01.904 "params": { 00:41:01.904 "impl_name": "posix", 00:41:01.904 "recv_buf_size": 2097152, 00:41:01.904 "send_buf_size": 2097152, 00:41:01.904 "enable_recv_pipe": true, 00:41:01.904 "enable_quickack": false, 00:41:01.904 "enable_placement_id": 0, 00:41:01.904 "enable_zerocopy_send_server": true, 00:41:01.904 "enable_zerocopy_send_client": false, 00:41:01.904 "zerocopy_threshold": 0, 00:41:01.904 "tls_version": 0, 00:41:01.904 "enable_ktls": false 00:41:01.904 } 00:41:01.904 } 00:41:01.904 ] 00:41:01.904 }, 00:41:01.904 { 00:41:01.904 "subsystem": "vmd", 00:41:01.904 "config": [] 00:41:01.904 }, 00:41:01.904 { 00:41:01.904 "subsystem": "accel", 00:41:01.904 "config": [ 00:41:01.904 { 00:41:01.904 "method": "accel_set_options", 00:41:01.904 "params": { 00:41:01.904 "small_cache_size": 128, 00:41:01.904 "large_cache_size": 16, 00:41:01.904 "task_count": 2048, 00:41:01.904 "sequence_count": 2048, 00:41:01.904 "buf_count": 2048 00:41:01.904 } 00:41:01.904 } 00:41:01.904 ] 00:41:01.904 }, 00:41:01.904 { 00:41:01.904 "subsystem": "bdev", 00:41:01.904 "config": [ 00:41:01.904 { 00:41:01.904 "method": "bdev_set_options", 00:41:01.904 "params": { 00:41:01.904 "bdev_io_pool_size": 65535, 00:41:01.904 "bdev_io_cache_size": 256, 00:41:01.904 "bdev_auto_examine": true, 00:41:01.904 "iobuf_small_cache_size": 128, 00:41:01.904 "iobuf_large_cache_size": 16 00:41:01.904 } 
00:41:01.904 }, 00:41:01.904 { 00:41:01.904 "method": "bdev_raid_set_options", 00:41:01.904 "params": { 00:41:01.904 "process_window_size_kb": 1024, 00:41:01.904 "process_max_bandwidth_mb_sec": 0 00:41:01.904 } 00:41:01.904 }, 00:41:01.904 { 00:41:01.904 "method": "bdev_iscsi_set_options", 00:41:01.904 "params": { 00:41:01.904 "timeout_sec": 30 00:41:01.904 } 00:41:01.904 }, 00:41:01.904 { 00:41:01.904 "method": "bdev_nvme_set_options", 00:41:01.904 "params": { 00:41:01.904 "action_on_timeout": "none", 00:41:01.904 "timeout_us": 0, 00:41:01.904 "timeout_admin_us": 0, 00:41:01.904 "keep_alive_timeout_ms": 10000, 00:41:01.904 "arbitration_burst": 0, 00:41:01.904 "low_priority_weight": 0, 00:41:01.904 "medium_priority_weight": 0, 00:41:01.904 "high_priority_weight": 0, 00:41:01.904 "nvme_adminq_poll_period_us": 10000, 00:41:01.904 "nvme_ioq_poll_period_us": 0, 00:41:01.904 "io_queue_requests": 512, 00:41:01.904 "delay_cmd_submit": true, 00:41:01.904 "transport_retry_count": 4, 00:41:01.904 "bdev_retry_count": 3, 00:41:01.904 "transport_ack_timeout": 0, 00:41:01.904 "ctrlr_loss_timeout_sec": 0, 00:41:01.904 "reconnect_delay_sec": 0, 00:41:01.904 "fast_io_fail_timeout_sec": 0, 00:41:01.904 "disable_auto_failback": false, 00:41:01.904 "generate_uuids": false, 00:41:01.904 "transport_tos": 0, 00:41:01.904 "nvme_error_stat": false, 00:41:01.904 "rdma_srq_size": 0, 00:41:01.904 "io_path_stat": false, 00:41:01.904 "allow_accel_sequence": false, 00:41:01.904 "rdma_max_cq_size": 0, 00:41:01.904 "rdma_cm_event_timeout_ms": 0, 00:41:01.904 "dhchap_digests": [ 00:41:01.904 "sha256", 00:41:01.904 "sha384", 00:41:01.904 "sha512" 00:41:01.904 ], 00:41:01.904 "dhchap_dhgroups": [ 00:41:01.904 "null", 00:41:01.904 "ffdhe2048", 00:41:01.904 "ffdhe3072", 00:41:01.904 "ffdhe4096", 00:41:01.904 "ffdhe6144", 00:41:01.904 "ffdhe8192" 00:41:01.904 ] 00:41:01.904 } 00:41:01.904 }, 00:41:01.904 { 00:41:01.904 "method": "bdev_nvme_attach_controller", 00:41:01.904 "params": { 00:41:01.904 
"name": "nvme0", 00:41:01.904 "trtype": "TCP", 00:41:01.904 "adrfam": "IPv4", 00:41:01.904 "traddr": "127.0.0.1", 00:41:01.904 "trsvcid": "4420", 00:41:01.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:01.904 "prchk_reftag": false, 00:41:01.904 "prchk_guard": false, 00:41:01.904 "ctrlr_loss_timeout_sec": 0, 00:41:01.904 "reconnect_delay_sec": 0, 00:41:01.904 "fast_io_fail_timeout_sec": 0, 00:41:01.904 "psk": "key0", 00:41:01.904 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:01.904 "hdgst": false, 00:41:01.904 "ddgst": false, 00:41:01.904 "multipath": "multipath" 00:41:01.904 } 00:41:01.904 }, 00:41:01.904 { 00:41:01.904 "method": "bdev_nvme_set_hotplug", 00:41:01.904 "params": { 00:41:01.904 "period_us": 100000, 00:41:01.904 "enable": false 00:41:01.904 } 00:41:01.904 }, 00:41:01.904 { 00:41:01.904 "method": "bdev_wait_for_examine" 00:41:01.904 } 00:41:01.904 ] 00:41:01.904 }, 00:41:01.904 { 00:41:01.904 "subsystem": "nbd", 00:41:01.904 "config": [] 00:41:01.904 } 00:41:01.904 ] 00:41:01.904 }' 00:41:01.904 [2024-11-26 07:50:46.011302] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:41:01.904 [2024-11-26 07:50:46.011356] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471296 ] 00:41:02.165 [2024-11-26 07:50:46.099190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:02.165 [2024-11-26 07:50:46.128532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:02.165 [2024-11-26 07:50:46.271689] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:02.735 07:50:46 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:02.735 07:50:46 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:41:02.735 07:50:46 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:41:02.735 07:50:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:02.735 07:50:46 keyring_file -- keyring/file.sh@121 -- # jq length 00:41:02.996 07:50:46 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:41:02.996 07:50:46 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:41:02.996 07:50:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:02.996 07:50:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:02.996 07:50:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:02.996 07:50:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:02.996 07:50:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:03.255 07:50:47 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:41:03.256 07:50:47 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:41:03.256 07:50:47 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:03.256 07:50:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:03.256 07:50:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:03.256 07:50:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:03.256 07:50:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:03.256 07:50:47 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:41:03.256 07:50:47 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:41:03.256 07:50:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:41:03.256 07:50:47 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:41:03.516 07:50:47 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:41:03.516 07:50:47 keyring_file -- keyring/file.sh@1 -- # cleanup 00:41:03.516 07:50:47 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.uhST1xAqCX /tmp/tmp.pKsScGy1dM 00:41:03.516 07:50:47 keyring_file -- keyring/file.sh@20 -- # killprocess 2471296 00:41:03.516 07:50:47 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2471296 ']' 00:41:03.516 07:50:47 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2471296 00:41:03.516 07:50:47 keyring_file -- common/autotest_common.sh@959 -- # uname 00:41:03.516 07:50:47 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:03.516 07:50:47 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2471296 00:41:03.516 07:50:47 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:03.516 07:50:47 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:03.516 07:50:47 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2471296' 00:41:03.516 killing process with pid 2471296 00:41:03.516 07:50:47 keyring_file -- common/autotest_common.sh@973 -- # kill 2471296 00:41:03.516 Received shutdown signal, test time was about 1.000000 seconds 00:41:03.516 00:41:03.516 Latency(us) 00:41:03.516 [2024-11-26T06:50:47.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:03.516 [2024-11-26T06:50:47.653Z] =================================================================================================================== 00:41:03.516 [2024-11-26T06:50:47.653Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:41:03.516 07:50:47 keyring_file -- common/autotest_common.sh@978 -- # wait 2471296 00:41:03.776 07:50:47 keyring_file -- keyring/file.sh@21 -- # killprocess 2469471 00:41:03.776 07:50:47 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2469471 ']' 00:41:03.776 07:50:47 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2469471 00:41:03.776 07:50:47 keyring_file -- common/autotest_common.sh@959 -- # uname 00:41:03.776 07:50:47 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:03.776 07:50:47 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2469471 00:41:03.776 07:50:47 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:03.776 07:50:47 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:03.776 07:50:47 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2469471' 00:41:03.776 killing process with pid 2469471 00:41:03.776 07:50:47 keyring_file -- common/autotest_common.sh@973 -- # kill 2469471 00:41:03.776 07:50:47 keyring_file -- common/autotest_common.sh@978 -- # wait 2469471 00:41:04.036 00:41:04.036 real 0m11.662s 00:41:04.036 user 0m27.982s 00:41:04.036 sys 0m2.601s 00:41:04.036 07:50:47 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
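The killprocess calls traced above guard the SIGKILL twice: the PID must still be alive (`kill -0`), and its command name as reported by `ps` must not be `sudo`. A minimal standalone sketch of that pattern (the function body here is illustrative, not SPDK's exact helper):

```shell
#!/usr/bin/env bash
# Guarded kill, mirroring the trace: probe liveness with kill -0, read the
# command name with ps, and refuse to SIGKILL a sudo wrapper process.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0      # already exited: nothing to do
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 0              # never kill -9 sudo itself
    echo "killing process with pid $pid"
    kill -9 "$pid"
}

sleep 60 &
killprocess $!
```

The `sudo` check matters because killing the wrapper would leave the privileged child running; the tests instead kill the reactor process directly, as the `process_name=reactor_1` line above shows.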
00:41:04.036 07:50:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:04.036 ************************************ 00:41:04.036 END TEST keyring_file 00:41:04.036 ************************************ 00:41:04.036 07:50:47 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:41:04.036 07:50:47 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:41:04.036 07:50:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:04.036 07:50:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:04.036 07:50:47 -- common/autotest_common.sh@10 -- # set +x 00:41:04.036 ************************************ 00:41:04.036 START TEST keyring_linux 00:41:04.036 ************************************ 00:41:04.036 07:50:48 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:41:04.036 Joined session keyring: 154851808 00:41:04.037 * Looking for test storage... 
00:41:04.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:41:04.037 07:50:48 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:04.037 07:50:48 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:41:04.037 07:50:48 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:04.299 07:50:48 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@345 -- # : 1 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@368 -- # return 0 00:41:04.299 07:50:48 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:04.299 07:50:48 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:04.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.299 --rc genhtml_branch_coverage=1 00:41:04.299 --rc genhtml_function_coverage=1 00:41:04.299 --rc genhtml_legend=1 00:41:04.299 --rc geninfo_all_blocks=1 00:41:04.299 --rc geninfo_unexecuted_blocks=1 00:41:04.299 00:41:04.299 ' 00:41:04.299 07:50:48 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:04.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.299 --rc genhtml_branch_coverage=1 00:41:04.299 --rc genhtml_function_coverage=1 00:41:04.299 --rc genhtml_legend=1 00:41:04.299 --rc geninfo_all_blocks=1 00:41:04.299 --rc geninfo_unexecuted_blocks=1 00:41:04.299 00:41:04.299 ' 
00:41:04.299 07:50:48 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:04.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.299 --rc genhtml_branch_coverage=1 00:41:04.299 --rc genhtml_function_coverage=1 00:41:04.299 --rc genhtml_legend=1 00:41:04.299 --rc geninfo_all_blocks=1 00:41:04.299 --rc geninfo_unexecuted_blocks=1 00:41:04.299 00:41:04.299 ' 00:41:04.299 07:50:48 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:04.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.299 --rc genhtml_branch_coverage=1 00:41:04.299 --rc genhtml_function_coverage=1 00:41:04.299 --rc genhtml_legend=1 00:41:04.299 --rc geninfo_all_blocks=1 00:41:04.299 --rc geninfo_unexecuted_blocks=1 00:41:04.299 00:41:04.299 ' 00:41:04.299 07:50:48 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:41:04.299 07:50:48 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
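The lcov version probe above walks two version strings field by field through cmp_versions, splitting on `.-:` and comparing numerically. The same comparison can be sketched compactly (the helper name is illustrative, not SPDK's implementation):

```shell
#!/usr/bin/env bash
# Field-by-field version compare, as in the cmp_versions trace above.
# Splits both versions on . - : and treats missing trailing fields as 0.
version_lt() {
    local IFS='.-:'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly older
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly newer
    done
    return 1                                        # equal: not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 < 2"   # → lcov 1.15 < 2
```

This is why `lt 1.15 2` succeeds in the trace: the first field already decides it (1 < 2), matching the branch-coverage LCOV_OPTS being selected for the older lcov.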
00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:04.299 07:50:48 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:04.299 07:50:48 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.299 07:50:48 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.299 07:50:48 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.299 07:50:48 keyring_linux -- paths/export.sh@5 -- # export PATH 00:41:04.299 07:50:48 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:41:04.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:04.299 07:50:48 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:41:04.299 07:50:48 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:41:04.299 07:50:48 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:41:04.299 07:50:48 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:41:04.299 07:50:48 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:41:04.299 07:50:48 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:41:04.299 07:50:48 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:41:04.299 07:50:48 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:41:04.299 07:50:48 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:41:04.299 07:50:48 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:04.299 07:50:48 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:41:04.299 07:50:48 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:41:04.299 07:50:48 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:41:04.299 07:50:48 keyring_linux -- nvmf/common.sh@733 -- # python - 00:41:04.299 07:50:48 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:41:04.299 07:50:48 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:41:04.299 /tmp/:spdk-test:key0 00:41:04.299 07:50:48 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:41:04.299 07:50:48 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:41:04.299 07:50:48 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:41:04.299 07:50:48 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:41:04.299 07:50:48 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:41:04.300 07:50:48 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:41:04.300 07:50:48 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:41:04.300 07:50:48 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:41:04.300 07:50:48 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:41:04.300 07:50:48 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:04.300 07:50:48 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:41:04.300 07:50:48 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:41:04.300 07:50:48 keyring_linux -- nvmf/common.sh@733 -- # python - 00:41:04.300 07:50:48 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:41:04.300 07:50:48 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:41:04.300 /tmp/:spdk-test:key1 00:41:04.300 07:50:48 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:41:04.300 
07:50:48 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2471734 00:41:04.300 07:50:48 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2471734 00:41:04.300 07:50:48 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2471734 ']' 00:41:04.300 07:50:48 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:04.300 07:50:48 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:04.300 07:50:48 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:04.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:04.300 07:50:48 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:04.300 07:50:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:04.300 [2024-11-26 07:50:48.408182] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:41:04.300 [2024-11-26 07:50:48.408282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471734 ] 00:41:04.560 [2024-11-26 07:50:48.492461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:04.560 [2024-11-26 07:50:48.533979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:05.130 07:50:49 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:05.130 07:50:49 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:41:05.130 07:50:49 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:41:05.130 07:50:49 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:05.130 07:50:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:05.130 [2024-11-26 07:50:49.217197] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:05.130 null0 00:41:05.130 [2024-11-26 07:50:49.249242] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:41:05.130 [2024-11-26 07:50:49.249646] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:05.391 07:50:49 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:05.391 07:50:49 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:41:05.391 443332307 00:41:05.391 07:50:49 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:41:05.391 222320550 00:41:05.391 07:50:49 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2472019 00:41:05.391 07:50:49 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2472019 /var/tmp/bperf.sock 00:41:05.391 07:50:49 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:41:05.391 07:50:49 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2472019 ']' 00:41:05.391 07:50:49 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:05.391 07:50:49 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:05.391 07:50:49 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:05.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:05.391 07:50:49 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:05.391 07:50:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:05.391 [2024-11-26 07:50:49.328194] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
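The two keyctl add lines above store interchange-format PSKs produced by format_interchange_psk from the raw hex keys. The structure can be checked by decoding the logged key0 string back: stripping the `NVMeTLSkey-1:00:` prefix and trailing colon leaves a base64 body whose first 32 bytes are the ASCII key itself (the assumption here, not shown in the log, is that the remaining 4 bytes are a little-endian CRC-32 trailer):

```shell
#!/usr/bin/env bash
# Decode the key0 PSK recorded in the log above. The base64 body carries the
# 32-character ASCII hex key plus (assumed) a 4-byte CRC-32 trailer.
psk="NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"
body=${psk#NVMeTLSkey-1:00:}    # strip the version/HMAC-id prefix
body=${body%:}                  # strip the trailing colon
echo "$body" | base64 -d | head -c 32; echo   # → 00112233445566778899aabbccddeeff
```

The `00` field after `NVMeTLSkey-1` corresponds to `digest=0` in the prep_key trace, i.e. the key is stored without an HMAC transform.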
00:41:05.391 [2024-11-26 07:50:49.328258] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472019 ] 00:41:05.391 [2024-11-26 07:50:49.425701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:05.391 [2024-11-26 07:50:49.455667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:06.334 07:50:50 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:06.334 07:50:50 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:41:06.334 07:50:50 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:41:06.334 07:50:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:41:06.334 07:50:50 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:41:06.334 07:50:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:41:06.595 07:50:50 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:41:06.595 07:50:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:41:06.595 [2024-11-26 07:50:50.692075] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:06.856 nvme0n1 00:41:06.857 07:50:50 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:41:06.857 07:50:50 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:41:06.857 07:50:50 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:41:06.857 07:50:50 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:41:06.857 07:50:50 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:41:06.857 07:50:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:06.857 07:50:50 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:41:06.857 07:50:50 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:41:06.857 07:50:50 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:41:06.857 07:50:50 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:41:06.857 07:50:50 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:06.857 07:50:50 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:41:06.857 07:50:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:07.117 07:50:51 keyring_linux -- keyring/linux.sh@25 -- # sn=443332307 00:41:07.117 07:50:51 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:41:07.117 07:50:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:41:07.117 07:50:51 keyring_linux -- keyring/linux.sh@26 -- # [[ 443332307 == \4\4\3\3\3\2\3\0\7 ]] 00:41:07.117 07:50:51 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 443332307 00:41:07.118 07:50:51 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:41:07.118 07:50:51 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:07.118 Running I/O for 1 seconds... 00:41:08.502 16277.00 IOPS, 63.58 MiB/s 00:41:08.502 Latency(us) 00:41:08.502 [2024-11-26T06:50:52.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:08.502 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:41:08.502 nvme0n1 : 1.01 16277.93 63.59 0.00 0.00 7830.00 6635.52 14527.15 00:41:08.502 [2024-11-26T06:50:52.639Z] =================================================================================================================== 00:41:08.502 [2024-11-26T06:50:52.639Z] Total : 16277.93 63.59 0.00 0.00 7830.00 6635.52 14527.15 00:41:08.502 { 00:41:08.502 "results": [ 00:41:08.502 { 00:41:08.502 "job": "nvme0n1", 00:41:08.502 "core_mask": "0x2", 00:41:08.502 "workload": "randread", 00:41:08.502 "status": "finished", 00:41:08.502 "queue_depth": 128, 00:41:08.502 "io_size": 4096, 00:41:08.502 "runtime": 1.007868, 00:41:08.502 "iops": 16277.925283866538, 00:41:08.502 "mibps": 63.585645640103664, 00:41:08.502 "io_failed": 0, 00:41:08.502 "io_timeout": 0, 00:41:08.502 "avg_latency_us": 7830.000250314925, 00:41:08.502 "min_latency_us": 6635.52, 00:41:08.502 "max_latency_us": 14527.146666666667 00:41:08.502 } 00:41:08.502 ], 00:41:08.502 "core_count": 1 00:41:08.502 } 00:41:08.502 07:50:52 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:08.502 07:50:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:08.502 07:50:52 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:41:08.502 07:50:52 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:41:08.502 07:50:52 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:41:08.502 07:50:52 keyring_linux 
-- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:41:08.503 07:50:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:08.503 07:50:52 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:41:08.503 07:50:52 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:41:08.503 07:50:52 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:41:08.503 07:50:52 keyring_linux -- keyring/linux.sh@23 -- # return 00:41:08.503 07:50:52 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:08.503 07:50:52 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:41:08.503 07:50:52 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:08.503 07:50:52 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:41:08.503 07:50:52 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:08.503 07:50:52 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:41:08.503 07:50:52 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:08.503 07:50:52 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:08.503 07:50:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:08.764 [2024-11-26 07:50:52.769948] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:41:08.764 [2024-11-26 07:50:52.769954] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:41:08.764 [2024-11-26 07:50:52.770943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d3240 (9): Bad file descriptor 00:41:08.764 [2024-11-26 07:50:52.771942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:41:08.764 [2024-11-26 07:50:52.771956] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:41:08.764 [2024-11-26 07:50:52.771964] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:41:08.764 [2024-11-26 07:50:52.771974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:41:08.764 request: 00:41:08.764 { 00:41:08.764 "name": "nvme0", 00:41:08.764 "trtype": "tcp", 00:41:08.764 "traddr": "127.0.0.1", 00:41:08.764 "adrfam": "ipv4", 00:41:08.764 "trsvcid": "4420", 00:41:08.764 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:08.764 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:08.764 "prchk_reftag": false, 00:41:08.764 "prchk_guard": false, 00:41:08.764 "hdgst": false, 00:41:08.764 "ddgst": false, 00:41:08.764 "psk": ":spdk-test:key1", 00:41:08.764 "allow_unrecognized_csi": false, 00:41:08.764 "method": "bdev_nvme_attach_controller", 00:41:08.765 "req_id": 1 00:41:08.765 } 00:41:08.765 Got JSON-RPC error response 00:41:08.765 response: 00:41:08.765 { 00:41:08.765 "code": -5, 00:41:08.765 "message": "Input/output error" 00:41:08.765 } 00:41:08.765 07:50:52 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:41:08.765 07:50:52 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:08.765 07:50:52 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:08.765 07:50:52 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:08.765 07:50:52 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:41:08.765 07:50:52 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:41:08.765 07:50:52 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:41:08.765 07:50:52 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:41:08.765 07:50:52 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:41:08.765 07:50:52 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:41:08.765 07:50:52 keyring_linux -- keyring/linux.sh@33 -- # sn=443332307 00:41:08.765 07:50:52 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 443332307 00:41:08.765 1 links removed 00:41:08.765 07:50:52 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:41:08.765 07:50:52 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:41:08.765 
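The cleanup above resolves each key name back to its serial number with `keyctl search` (get_keysn) and then unlinks it from the session keyring. The round trip the test relies on, sketched with an illustrative key name and payload (requires the keyutils tools and a session keyring, as `keyctl-session-wrapper` provides):

```shell
#!/usr/bin/env bash
# keyctl round trip as traced above: add a user-type key to the session
# keyring (@s), look up its serial by description, then unlink it.
sn=$(keyctl add user :spdk-test:demo "payload" @s)   # prints the new serial
found=$(keyctl search @s user :spdk-test:demo)       # get_keysn equivalent
[ "$sn" = "$found" ] && echo "serials match"
keyctl unlink "$sn" @s                               # cleanup, as in the trap
```

The serial printed by `keyctl add` is what the test later compares against the `.sn` field reported by the RPC (`443332307` above); `keyctl unlink` is what produces the "1 links removed" lines in the log.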
07:50:52 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:41:08.765 07:50:52 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:41:08.765 07:50:52 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:41:08.765 07:50:52 keyring_linux -- keyring/linux.sh@33 -- # sn=222320550 00:41:08.765 07:50:52 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 222320550 00:41:08.765 1 links removed 00:41:08.765 07:50:52 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2472019 00:41:08.765 07:50:52 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2472019 ']' 00:41:08.765 07:50:52 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2472019 00:41:08.765 07:50:52 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:41:08.765 07:50:52 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:08.765 07:50:52 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2472019 00:41:08.765 07:50:52 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:08.765 07:50:52 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:08.765 07:50:52 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2472019' 00:41:08.765 killing process with pid 2472019 00:41:08.765 07:50:52 keyring_linux -- common/autotest_common.sh@973 -- # kill 2472019 00:41:08.765 Received shutdown signal, test time was about 1.000000 seconds 00:41:08.765 00:41:08.765 Latency(us) 00:41:08.765 [2024-11-26T06:50:52.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:08.765 [2024-11-26T06:50:52.902Z] =================================================================================================================== 00:41:08.765 [2024-11-26T06:50:52.902Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:08.765 07:50:52 keyring_linux -- common/autotest_common.sh@978 -- # wait 2472019 
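The check_keys helper exercised above leans on two jq idioms: `length` to count the entries returned by keyring_get_keys, and `select(.name == ...)` to pull one key object out of the array. Sketched against a sample payload (the JSON here is illustrative, not real RPC output):

```shell
#!/usr/bin/env bash
# jq patterns from check_keys/get_key: count the keyring, then select one
# key by name and read a field from it.
keys='[{"name":":spdk-test:key0","sn":443332307,"refcnt":1}]'
count=$(echo "$keys" | jq length)
sn=$(echo "$keys" | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
echo "$count $sn"   # → 1 443332307
```

After bdev_nvme_detach_controller the same `jq length` probe returns 0, which is why the `(( 0 == count ))` branch and the early `return` appear in the trace above.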
00:41:09.026 07:50:52 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2471734 00:41:09.026 07:50:52 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2471734 ']' 00:41:09.026 07:50:52 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2471734 00:41:09.026 07:50:52 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:41:09.026 07:50:52 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:09.026 07:50:52 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2471734 00:41:09.026 07:50:53 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:09.026 07:50:53 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:09.026 07:50:53 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2471734' 00:41:09.026 killing process with pid 2471734 00:41:09.026 07:50:53 keyring_linux -- common/autotest_common.sh@973 -- # kill 2471734 00:41:09.026 07:50:53 keyring_linux -- common/autotest_common.sh@978 -- # wait 2471734 00:41:09.287 00:41:09.287 real 0m5.229s 00:41:09.287 user 0m9.668s 00:41:09.287 sys 0m1.427s 00:41:09.287 07:50:53 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:09.287 07:50:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:09.287 ************************************ 00:41:09.287 END TEST keyring_linux 00:41:09.287 ************************************ 00:41:09.287 07:50:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:41:09.287 07:50:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:41:09.287 07:50:53 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:41:09.287 07:50:53 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:41:09.287 07:50:53 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:41:09.287 07:50:53 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:41:09.287 07:50:53 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:41:09.287 07:50:53 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:41:09.287 07:50:53 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:41:09.287 07:50:53 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:41:09.287 07:50:53 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:41:09.287 07:50:53 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:41:09.287 07:50:53 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:41:09.287 07:50:53 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:41:09.287 07:50:53 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:41:09.287 07:50:53 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:41:09.287 07:50:53 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:41:09.287 07:50:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:09.288 07:50:53 -- common/autotest_common.sh@10 -- # set +x 00:41:09.288 07:50:53 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:41:09.288 07:50:53 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:41:09.288 07:50:53 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:41:09.288 07:50:53 -- common/autotest_common.sh@10 -- # set +x 00:41:17.430 INFO: APP EXITING 00:41:17.430 INFO: killing all VMs 00:41:17.430 INFO: killing vhost app 00:41:17.430 INFO: EXIT DONE 00:41:20.729 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:41:20.729 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:41:20.729 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:41:20.729 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:41:20.729 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:41:20.729 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:41:20.729 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:41:20.729 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:41:20.729 0000:65:00.0 (144d a80a): Already using the nvme driver 00:41:20.729 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:41:20.990 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:41:20.990 
0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:41:20.990 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:41:20.990 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:41:20.990 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:41:20.990 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:41:20.990 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:41:25.195 Cleaning 00:41:25.195 Removing: /var/run/dpdk/spdk0/config 00:41:25.195 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:41:25.195 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:41:25.195 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:41:25.195 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:41:25.195 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:41:25.195 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:41:25.195 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:41:25.195 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:41:25.195 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:41:25.195 Removing: /var/run/dpdk/spdk0/hugepage_info 00:41:25.195 Removing: /var/run/dpdk/spdk1/config 00:41:25.195 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:41:25.195 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:41:25.195 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:41:25.195 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:41:25.195 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:41:25.195 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:41:25.195 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:41:25.195 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:41:25.195 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:41:25.195 Removing: /var/run/dpdk/spdk1/hugepage_info 00:41:25.195 Removing: /var/run/dpdk/spdk2/config 00:41:25.195 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:41:25.195 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:41:25.195 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:41:25.195 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:41:25.195 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:41:25.195 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:41:25.195 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:41:25.195 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:41:25.195 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:41:25.195 Removing: /var/run/dpdk/spdk2/hugepage_info 00:41:25.195 Removing: /var/run/dpdk/spdk3/config 00:41:25.195 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:41:25.195 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:41:25.195 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:41:25.195 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:41:25.195 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:41:25.195 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:41:25.195 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:41:25.195 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:41:25.456 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:41:25.456 Removing: /var/run/dpdk/spdk3/hugepage_info 00:41:25.456 Removing: /var/run/dpdk/spdk4/config 00:41:25.456 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:41:25.456 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:41:25.456 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:41:25.456 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:41:25.456 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:41:25.456 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:41:25.456 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:41:25.456 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:41:25.456 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:41:25.456 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:41:25.456 Removing: /dev/shm/bdev_svc_trace.1 00:41:25.456 Removing: /dev/shm/nvmf_trace.0 00:41:25.456 Removing: /dev/shm/spdk_tgt_trace.pid1856184 00:41:25.456 Removing: /var/run/dpdk/spdk0 00:41:25.456 Removing: /var/run/dpdk/spdk1 00:41:25.456 Removing: /var/run/dpdk/spdk2 00:41:25.456 Removing: /var/run/dpdk/spdk3 00:41:25.456 Removing: /var/run/dpdk/spdk4 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1854423 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1856184 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1856714 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1857751 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1858089 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1859161 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1859485 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1859660 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1860768 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1861556 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1861944 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1862346 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1862761 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1863079 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1863220 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1863553 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1863942 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1865026 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1868595 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1868967 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1869320 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1869346 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1869862 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1870053 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1870491 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1870765 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1871148 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1871214 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1871516 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1871693 00:41:25.456 Removing: 
/var/run/dpdk/spdk_pid1872300 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1872499 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1872775 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1878013 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1883883 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1896470 00:41:25.456 Removing: /var/run/dpdk/spdk_pid1897216 00:41:25.717 Removing: /var/run/dpdk/spdk_pid1903544 00:41:25.717 Removing: /var/run/dpdk/spdk_pid1904018 00:41:25.717 Removing: /var/run/dpdk/spdk_pid1909660 00:41:25.717 Removing: /var/run/dpdk/spdk_pid1917317 00:41:25.717 Removing: /var/run/dpdk/spdk_pid1920415 00:41:25.717 Removing: /var/run/dpdk/spdk_pid1933996 00:41:25.717 Removing: /var/run/dpdk/spdk_pid1946082 00:41:25.717 Removing: /var/run/dpdk/spdk_pid1948102 00:41:25.717 Removing: /var/run/dpdk/spdk_pid1949348 00:41:25.717 Removing: /var/run/dpdk/spdk_pid1972401 00:41:25.717 Removing: /var/run/dpdk/spdk_pid1977849 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2038580 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2045597 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2053203 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2061739 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2061741 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2062741 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2063747 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2064754 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2065430 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2065432 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2065783 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2065928 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2066082 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2067185 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2068637 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2069671 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2070340 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2070342 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2070683 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2072115 
00:41:25.717 Removing: /var/run/dpdk/spdk_pid2073409 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2083888 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2120549 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2126452 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2128441 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2130581 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2130804 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2130817 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2131141 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2131671 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2133880 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2134959 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2135339 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2138051 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2138758 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2139475 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2145163 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2152378 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2152379 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2152380 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2158194 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2169486 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2174307 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2182206 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2183708 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2185448 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2187080 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2193139 00:41:25.717 Removing: /var/run/dpdk/spdk_pid2198967 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2204606 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2215344 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2215353 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2221104 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2221283 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2221510 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2222109 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2222116 00:41:25.978 Removing: 
/var/run/dpdk/spdk_pid2228180 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2228941 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2234862 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2237950 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2244961 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2252174 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2262779 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2272476 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2272478 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2297534 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2298328 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2299155 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2299890 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2300904 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2301640 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2302327 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2303010 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2308753 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2309088 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2316804 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2317065 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2324561 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2330266 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2342323 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2342997 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2348679 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2349078 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2354478 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2361885 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2364886 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2378448 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2390111 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2392106 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2393113 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2414223 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2419533 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2422726 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2431406 
00:41:25.978 Removing: /var/run/dpdk/spdk_pid2431420 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2438094 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2440366 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2442814 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2444034 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2446522 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2447909 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2458705 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2459273 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2459945 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2463007 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2463675 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2464245 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2469471 00:41:25.978 Removing: /var/run/dpdk/spdk_pid2469490 00:41:26.239 Removing: /var/run/dpdk/spdk_pid2471296 00:41:26.239 Removing: /var/run/dpdk/spdk_pid2471734 00:41:26.239 Removing: /var/run/dpdk/spdk_pid2472019 00:41:26.239 Clean 00:41:26.239 07:51:10 -- common/autotest_common.sh@1453 -- # return 0 00:41:26.239 07:51:10 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:41:26.239 07:51:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:26.239 07:51:10 -- common/autotest_common.sh@10 -- # set +x 00:41:26.239 07:51:10 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:41:26.239 07:51:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:26.239 07:51:10 -- common/autotest_common.sh@10 -- # set +x 00:41:26.239 07:51:10 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:41:26.239 07:51:10 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:41:26.239 07:51:10 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:41:26.239 07:51:10 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:41:26.239 07:51:10 -- spdk/autotest.sh@398 -- # hostname 00:41:26.239 
07:51:10 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:41:26.500 geninfo: WARNING: invalid characters removed from testname! 00:41:53.077 07:51:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:54.987 07:51:38 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:56.393 07:51:40 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:58.301 07:51:41 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:59.685 07:51:43 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:01.598 07:51:45 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:02.982 07:51:46 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:42:02.982 07:51:46 -- spdk/autorun.sh@1 -- $ timing_finish 00:42:02.982 07:51:46 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:42:02.982 07:51:46 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:42:02.982 07:51:46 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:42:02.982 07:51:46 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:42:02.982 + [[ -n 1768397 ]] 00:42:02.982 + sudo kill 
1768397 00:42:02.992 [Pipeline] } 00:42:03.006 [Pipeline] // stage 00:42:03.011 [Pipeline] } 00:42:03.024 [Pipeline] // timeout 00:42:03.028 [Pipeline] } 00:42:03.040 [Pipeline] // catchError 00:42:03.045 [Pipeline] } 00:42:03.058 [Pipeline] // wrap 00:42:03.062 [Pipeline] } 00:42:03.074 [Pipeline] // catchError 00:42:03.082 [Pipeline] stage 00:42:03.083 [Pipeline] { (Epilogue) 00:42:03.094 [Pipeline] catchError 00:42:03.096 [Pipeline] { 00:42:03.108 [Pipeline] echo 00:42:03.110 Cleanup processes 00:42:03.116 [Pipeline] sh 00:42:03.404 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:42:03.404 2486040 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:42:03.419 [Pipeline] sh 00:42:03.704 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:42:03.704 ++ grep -v 'sudo pgrep' 00:42:03.704 ++ awk '{print $1}' 00:42:03.704 + sudo kill -9 00:42:03.704 + true 00:42:03.717 [Pipeline] sh 00:42:04.002 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:42:16.241 [Pipeline] sh 00:42:16.527 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:42:16.527 Artifacts sizes are good 00:42:16.600 [Pipeline] archiveArtifacts 00:42:16.631 Archiving artifacts 00:42:16.783 [Pipeline] sh 00:42:17.069 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:42:17.085 [Pipeline] cleanWs 00:42:17.096 [WS-CLEANUP] Deleting project workspace... 00:42:17.096 [WS-CLEANUP] Deferred wipeout is used... 00:42:17.103 [WS-CLEANUP] done 00:42:17.105 [Pipeline] } 00:42:17.124 [Pipeline] // catchError 00:42:17.134 [Pipeline] sh 00:42:17.420 + logger -p user.info -t JENKINS-CI 00:42:17.429 [Pipeline] } 00:42:17.441 [Pipeline] // stage 00:42:17.445 [Pipeline] } 00:42:17.458 [Pipeline] // node 00:42:17.462 [Pipeline] End of Pipeline 00:42:17.497 Finished: SUCCESS